OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers. Copyright OpenCV foundation, 2012-2018. Sun, 14 May 2017 10:02:19 -0500

Question: Can I get 2D world coordinates from a single image ([u,v] coords)?
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/

I'm new to everything vision!
I've read some similar posts but I can't get my head around it. Here's what I'm trying to do:
- I have a single camera looking down on a surface (we can assume it's normal to the surface) and I want to detect a hole and obtain its [X,Y] in a world coordinate system I have defined.
- I do not care about the Z info; I know the real hole diameter.
I have calibrated my camera and have the intrinsic matrix. For the extrinsic info I have taken 4 points from my image (mouse callback) and their corresponding 4 real points in the world coordinate system to use 'solvePnP' and 'Rodrigues' (for the rotation matrix).
I can detect the hole using HoughCircles and I have the center in [u,v] coords.
Now, is there a way to obtain that [u,v] point as [X,Y] in my defined coordinate system?
Fri, 24 Jul 2015 13:20:56 -0500

Answer by mmo
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?answer=68891#post-id-68891

Thanks for the answer! Very helpful. Sorry I took a long time to post how I solved my problem. I ended up using some of your equations and an additional one.
- From the camera intrinsic parameters I obtained: fx, fy, cx, and cy.
- From previous measurements I have the [object points] and their corresponding [image points] (used in solvePnP). I obtained the [image points] with the 'mouse callback' function.
- 'Rodrigues' function gets me the Rotation Matrix with the 'rvec' from 'solvePnP'.
- 'HoughCircles' gives me the [u,v] of the hole center I'm identifying.
- Converted the 2D image coordinate of the hole into the normalized camera frame: [x', y', 1] (equation above).
- Made the scale factor 's' the distance from the surface to the camera sensor.
- Multiplied 's' by the normalized camera coordinates (equation above).
- Finally, I used this equation to transform the 3D coordinate in the camera frame to my world frame,
where 'R^T' is the transposed rotation matrix and 't' is tvec.
![image description](/upfiles/14399118989997775.png)
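The steps above can be sketched in Python. This is a minimal sketch with numpy only; all numeric values are illustrative placeholders, not the actual calibration from this post:

```python
import numpy as np

# Intrinsics (fx, fy, cx, cy) come from calibration; R is the rotation
# matrix from cv2.Rodrigues(rvec) and t is tvec, both from solvePnP.
# The values below are placeholders for illustration.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
R = np.eye(3)                  # rotation: world frame -> camera frame
t = np.array([0.0, 0.0, 0.5])  # translation: world frame -> camera frame

def pixel_to_world(u, v, s):
    """Back-project pixel (u, v) at known camera-to-surface distance s:
    P_world = R^T (s * [x', y', 1]^T - t)."""
    xn = (u - cx) / fx                   # normalized camera frame: x'
    yn = (v - cy) / fy                   # normalized camera frame: y'
    p_cam = s * np.array([xn, yn, 1.0])  # 3D point in the camera frame
    return R.T @ (p_cam - t)             # camera frame -> world frame

# Example: hole center from HoughCircles at pixel (350, 260), surface
# 0.5 units in front of the camera
X, Y, Z = pixel_to_world(350, 260, 0.5)
```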
Results were pretty accurate. I've changed some things since then and I don't remember the exact tolerance, but it was better than I expected. I think a different lens might help too; mine has fisheye distortion and even the 'undistort' function doesn't fix it completely. The edges are no good.
Thanks again.
Tue, 18 Aug 2015 10:36:21 -0500

Answer by Pedro Batista
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?answer=114407#post-id-114407

Couldn't write this up in a comment, so I'll make it an answer instead, even though it is really a question.
I calibrated my camera and obtained the cameraMatrix and distortion coefficients. I believe they are correct because if I undistort my image it looks good.
I'm attempting to transform image coordinates to plane coordinates in the following setup:
![image description](https://s18.postimg.org/4kuwnzkix/Setup.png)
Each black square is one corner of my calibrated surface. I know the image coordinates of each of these squares; in my current calibration they are:
TopLeft: [170, 243]
TopRight: [402, 238]
BottomLeft: [82, 383]
BottomRight: [513, 346]
Now I want a new coordinate system, such that
TopLeft: [0, 0]
TopRight: [0, 1]
BottomLeft: [1, 0]
BottomRight: [1, 1]
Now I want to detect objects inside this area and transform their image coordinates to my [0,0] - [1,1] system.
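Since the four corners and every object of interest lie on a single plane, this particular pixel-to-[0,1] mapping can also be computed directly as a plane homography from the four correspondences alone, with no intrinsics needed. A minimal sketch in plain numpy, using the corner values above and ignoring lens distortion (`cv2.getPerspectiveTransform` computes the same matrix):

```python
import numpy as np

# Four pixel corners and their target plane coordinates, as listed above
src = np.array([[170, 243], [402, 238], [82, 383], [513, 346]], float)
dst = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)

# Build the 8x8 linear system for the homography H (h33 fixed to 1):
# u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), and likewise for v
A, b = [], []
for (x, y), (u, v) in zip(src, dst):
    A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
    A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
H = np.append(np.linalg.solve(np.array(A), np.array(b)), 1.0).reshape(3, 3)

def to_plane(px, py):
    """Map a pixel to plane coordinates (perspective divide included)."""
    u, v, w = H @ np.array([px, py, 1.0])
    return u / w, v / w

corner = to_plane(170, 243)  # TopLeft should map back to (0, 0)
```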
I used these four corners as input for the cv::solvePnP function and then used cv::Rodrigues to obtain the rotation matrix. With my current calibration, these are the results:
translation vec:
[-0.6335161884885361;
0.0327718985712979;
2.090753021694066]
Rotation Mat:
[0.994295199236491, 0.1037446225804736, 0.02478124413546758;
-0.05350435092326501, 0.2841225297901329, 0.9572939321326209;
0.09227318791256028, -0.9531586653602274, 0.2880524560582033]
So I would think this would be enough to transform any image point inside that area into [0; 1] coordinates. To test, I tried the calculations on image points I already know (the corners).
If I try to use these matrices to transform the TopLeft corner [170, 243], I should get a [0, 0] result. But that is not what I am getting, so obviously I am missing something along the way, since I know this task is achievable.
Wed, 23 Nov 2016 11:53:29 -0600

Comment by Eduardo
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114512#post-id-114512

So, you are saying that if you do [u, v, 1]^T = K . (R | t) . [0, 0, 0, 1]^T, it is not equal to [170, 243, 1]?
If so, the output of cv::solvePnP is not correct. Instead, you could try to use the chessboard image (more points).
Thu, 24 Nov 2016 03:33:34 -0600

Comment by Pedro Batista
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114578#post-id-114578

No, it is not perfectly square; it is just a manually made square.
I used the chessboard to get the intrinsics. Are you telling me to lay the chessboard on the surface and let the calibrator calculate the rvec and tvec?
Thu, 24 Nov 2016 09:29:13 -0600

Comment by Pedro Batista
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114693#post-id-114693I tried to transform this matrix system to a normal system of equations. That is equivalent to:
u = a*X + b*Y + b*Z + d; v = e*X + f*Y + g*Z + h; w = i*x + j*y + k*z + l. in which a,b,c,d... are the components of result of the dotProduct K.(R | t): a = fx * r11 + cx * r31 and so on.
w should always be 1, but for some reason I need to divide u and v by w so that I obtain correct imgPoints (maybe the problem is this?).
I tested this and get same results as matrix operations.
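A side note on the w question above: the divide is expected rather than a bug. K.(R|t).[X, Y, Z, 1]^T yields homogeneous pixel coordinates [u*w, v*w, w]^T, where w is the point's depth in the camera frame, so u and v always need the perspective divide. A minimal numpy sketch with illustrative values, not the thread's actual calibration:

```python
import numpy as np

# Illustrative intrinsics and pose (placeholders, not the thread's values)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # rotation: world -> camera
t = np.array([0.0, 0.0, 2.0])  # translation: world -> camera

def project(P):
    """Project world point P. The third homogeneous component w is the
    point's depth in the camera frame, so it is generally not 1 and the
    perspective divide is required."""
    uvw = K @ (R @ P + t)  # homogeneous pixel coordinates [u*w, v*w, w]
    return uvw[:2] / uvw[2]

u, v = project(np.array([0.25, 0.25, 0.0]))  # -> pixel (420.0, 340.0)
```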
Then I used a website to solve that system for X, Y, Z and I get [this](https://s13.postimg.org/ic14dtogn/image.png).
Fri, 25 Nov 2016 09:43:45 -0600

Comment by Eduardo
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114580#post-id-114580

I was thinking about using the chessboard to:
- be sure that the pose returned by `cv::solvePnP` is accurate, by testing whether the projection of the object points using the pose and the intrinsics matches the chessboard image corners
- validate the principle using the corners detected by `cv::findChessboardCorners` and check the accuracy against the true model points
If all these steps are validated, you should be able to determine where the issue is when you switch to your custom case (a problem with the pose, the accuracy of the detected corners, a limitation of the method, etc.).
Thu, 24 Nov 2016 09:36:40 -0600

Comment by Eduardo
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114702#post-id-114702You cannot revert the equation:
- u, v are known
- K and (R|t) are known
- solve for X, Y, Z
Because the solution is an image ray: there are infinitely many 3D points along that ray. That's why we use stereo systems to reconstruct depth.
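The planar case can be sketched concretely: intersecting the back-projected ray with a known plane fixes the scale along the ray. A hedged numpy sketch with illustrative values, with the plane expressed in the camera frame:

```python
import numpy as np

# Illustrative intrinsics (placeholders, not the thread's calibration)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Plane in the camera frame: n . P = d (a fronto-parallel plane at depth 2)
n = np.array([0.0, 0.0, 1.0])
d = 2.0

def pixel_to_plane_point(u, v):
    """Back-project pixel (u, v) into a ray and intersect it with the plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction (camera frame)
    s = d / (n @ ray)   # scale at which the ray meets the plane
    return s * ray      # the unique 3D point on the plane

P = pixel_to_plane_point(420, 340)  # -> approximately [0.25, 0.25, 2.0]
```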
But if you know that the image ray intersects a plane (with a known plane equation), you can get X, Y, Z. This is basically what I wrote in my answer.
Fri, 25 Nov 2016 11:15:30 -0600

Comment by Pedro Batista
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114569#post-id-114569

Also, I tested the error for the transformation matrix calculated with 4 points and with 13 points, and the accuracy does not improve much by adding more points.
Thu, 24 Nov 2016 08:29:35 -0600

Comment by Pedro Batista
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114690#post-id-114690

Can you help me invert [u, v, 1]^T = K . (R | t) . [X, Y, Z, 1]^T to solve for [X, Y, Z]? The problem is that (R|t) is not invertible, so I'm not sure how to proceed, and algebra class was way too long ago :P
Fri, 25 Nov 2016 09:15:12 -0600

Comment by Pedro Batista
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114567#post-id-114567

While doing test calculations using the formula in your comment, I would input plane coordinates for known image points. I realized that the resulting vector never had a z value of 1. It is always something like [400, 500, 1.7].
It felt like normalizing this result by dividing the vector by the z value would be correct, and it turns out the output started to make sense. I calculated image coordinates for known points, and the maximum error I am getting is around 20 pixels (the error grows with distance to the camera). My application needs good accuracy, but I may be able to do some post-processing on these results to fix the resulting position.
Thu, 24 Nov 2016 08:27:02 -0600

Comment by Pedro Batista
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114409#post-id-114409

The rotation matrix doesn't look too good... those 0.99 values do not seem right. If I take an image coordinate and run it through that matrix, it doesn't look like it would turn into a [0; 1] coordinate.
Wed, 23 Nov 2016 12:00:58 -0600

Comment by Pedro Batista
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114694#post-id-114694This should work both ways, but the results are not the same. For example: if I input X = 0.25, Y = 0.25 and get u=300, v=200, then when I input u=200 and v=300 to get X,Y,Z in the inverted system, I should obtain X=0.25, Y=0.25... but I am getting way different results.
This is basic math so I must be screwing up somewhere, but I went through everything 4 or 5 times and I can't find the problem.Fri, 25 Nov 2016 09:46:15 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114694#post-id-114694Comment by Eduardo for <p>Couldn't write this up in a comment, so I'll make it an answer instead even though this is a question.</p>
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114920#post-id-114920Forgot to mention that the 3D point obtained is in the camera frame. You can easily transform the point to the object frame as you know the camera pose.
See my edited answer for a working example.Sun, 27 Nov 2016 17:30:51 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114920#post-id-114920Comment by Pedro Batista for <p>Couldn't write this up in a comment, so I'll make it an answer instead even though this is a question.</p>
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=115057#post-id-115057Thanks for your effort in helping me, I will take a look at it now :)Mon, 28 Nov 2016 04:36:49 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=115057#post-id-115057Comment by ibrahim_5403 for <p>Couldn't write this up in a comment, so I'll make it an answer instead even though this is a question.</p>
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=148418#post-id-148418What are the chessboard model points (model coordinates)?Sun, 14 May 2017 10:02:19 -0500http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=148418#post-id-148418Comment by Pedro Batista for <p>Couldn't write this up in a comment, so I'll make it an answer instead even though this is a question.</p>
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114532#post-id-114532Ok, I will try to add more known points to the solvePnP function and try again.Thu, 24 Nov 2016 05:16:23 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114532#post-id-114532Comment by Eduardo for <p>Couldn't write this up in a comment, so I'll make it an answer instead even though this is a question.</p>
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114571#post-id-114571Maybe a stupid question: is your pattern perfectly square (so you can get [0,0] to [1,1] coordinates)?
Also, I would validate the process with a chessboard pattern just to be sure.Thu, 24 Nov 2016 08:53:46 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114571#post-id-114571Answer by Eduardo for <p>I'm new to everything vision!</p>
http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?answer=67039#post-id-67039It should be possible if you know:
- the camera intrinsic parameters: ![camera intrinsic parameters](/upfiles/14378614365891025.png)
- the camera pose: ![camera pose](/upfiles/14378617706501256.png)
- the plane equation that contains the hole: ![plane equation](/upfiles/1437864225927213.png)
For the plane equation, you need the coordinates of 3 points, expressed in the world frame, that lie on the same plane as the hole.
Then, knowing the camera pose, you can transform those coordinates into the camera frame and compute the [plane equation](https://en.wikipedia.org/wiki/Plane_%28geometry%29#Describing_a_plane_through_three_points).
## Steps ##
Convert the 2D image coordinate of the hole into the normalized camera frame:
![image description](/upfiles/14378647785864535.png)
Get the scale factor: ![image description](/upfiles/14378654586431079.png)
The 3D coordinate of the hole in the camera frame is then: ![image description](/upfiles/14378656146651749.png)
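In case the formula images above do not load, here is my reconstruction of the three steps in LaTeX (consistent with the compute3DOnPlaneFrom2D function in the example code later in this answer):

```latex
% Normalized camera coordinates of the image point (u, v):
x_n = \frac{u - c_x}{f_x}, \qquad y_n = \frac{v - c_y}{f_y}
% Scale factor, from the plane a x + b y + c z + d = 0 in the camera frame:
s = \frac{-d}{a\,x_n + b\,y_n + c}
% 3D point on the plane, in the camera frame:
\mathbf{X}_c = \left( s\,x_n,\; s\,y_n,\; s \right)
```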
----------
## Plane Equation ##
Different formulas:
- ![plane equation](/upfiles/1437864225927213.png)
- ![plane equation](/upfiles/143786366794169.png)
- ![plane equation](/upfiles/14380164155010323.png)
- ![plane equation](/upfiles/14378638339238191.png)
You can then identify the quadruplet ![image description](/upfiles/14378639881327576.png) with the quadruplet ![image description](/upfiles/14378643029500772.png).
Consider a first plane that contains the hole and a second plane, parallel to the first, that passes through the point at normalized coordinate z=1 (obtained from the 2D coordinate): ![image description](/upfiles/14378647785864535.png).
For the two plane equations, the coefficients a, b, c are the same, only the coefficient d is different.
The "scale factor" is then:
![image description](/upfiles/14378654586431079.png)
Edit:
Your case is a little easier since the camera is almost parallel to the surface. You could compute the "scale factor" from the hole diameter, knowing the ratio between the real-world diameter and the diameter in pixels after a calibration step.
----------
**Edit2 (2016/11/27):**
Here is a full working example. The data used to estimate the camera intrinsic matrix can be found in the [OpenCV sample data directory](https://github.com/opencv/opencv/tree/3.1.0/samples/data) (I used the left images). It should also be possible to do the same by computing the point as the intersection between the image ray and the plane.
The example code is a little bit long. What it does:
- extract 2D image corners using `cv::findChessboardCorners` (image used is [left04.jpg](https://github.com/opencv/opencv/blob/3.1.0/samples/data/left04.jpg))
- compute the camera pose using `cv::solvePnP`
- check the camera pose by computing the RMS reprojection error
- compute the plane equation from 3 points
- compute the 3D point in camera and object frame using the 2D image coordinate, the plane equation and the camera pose and compute the RMS error
- note: here the distortion coefficients are not taken into account when computing the normalized camera coordinates
Code:
#include <iostream>
#include <opencv2/opencv.hpp>

//@ref: http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };

static void calcChessboardCorners(cv::Size boardSize, float squareSize, std::vector<cv::Point3f>& corners, Pattern patternType = CHESSBOARD) {
    corners.resize(0);

    switch(patternType) {
    case CHESSBOARD:
    case CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(cv::Point3f(float(j*squareSize),
                                              float(i*squareSize), 0));
        break;

    case ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(cv::Point3f(float((2*j + i % 2)*squareSize),
                                              float(i*squareSize), 0));
        break;

    default:
        CV_Error(cv::Error::StsBadArg, "Unknown pattern type\n");
    }
}
double checkCameraPose(const std::vector<cv::Point3f> &modelPts, const std::vector<cv::Point2f> &imagePts, const cv::Mat &cameraMatrix,
                       const cv::Mat &distCoeffs, const cv::Mat &rvec, const cv::Mat &tvec) {
    std::vector<cv::Point2f> projectedPts;
    cv::projectPoints(modelPts, rvec, tvec, cameraMatrix, distCoeffs, projectedPts);

    double rms = 0.0;
    for (size_t i = 0; i < projectedPts.size(); i++) {
        rms += (projectedPts[i].x-imagePts[i].x)*(projectedPts[i].x-imagePts[i].x) + (projectedPts[i].y-imagePts[i].y)*(projectedPts[i].y-imagePts[i].y);
    }

    return sqrt(rms / projectedPts.size());
}
cv::Point3f transformPoint(const cv::Point3f &pt, const cv::Mat &rvec, const cv::Mat &tvec) {
    //Compute res = (R | T) . pt
    cv::Mat rotationMatrix;
    cv::Rodrigues(rvec, rotationMatrix);

    cv::Mat transformationMatrix = (cv::Mat_<double>(4, 4) << rotationMatrix.at<double>(0,0), rotationMatrix.at<double>(0,1), rotationMatrix.at<double>(0,2), tvec.at<double>(0),
                                    rotationMatrix.at<double>(1,0), rotationMatrix.at<double>(1,1), rotationMatrix.at<double>(1,2), tvec.at<double>(1),
                                    rotationMatrix.at<double>(2,0), rotationMatrix.at<double>(2,1), rotationMatrix.at<double>(2,2), tvec.at<double>(2),
                                    0, 0, 0, 1);

    cv::Mat homogeneousPt = (cv::Mat_<double>(4, 1) << pt.x, pt.y, pt.z, 1.0);
    cv::Mat transformedPtMat = transformationMatrix * homogeneousPt;
    cv::Point3f transformedPt(transformedPtMat.at<double>(0), transformedPtMat.at<double>(1), transformedPtMat.at<double>(2));
    return transformedPt;
}
cv::Point3f transformPointInverse(const cv::Point3f &pt, const cv::Mat &rvec, const cv::Mat &tvec) {
    //Compute res = (R^t | -R^t . T) . pt
    cv::Mat rotationMatrix;
    cv::Rodrigues(rvec, rotationMatrix);
    rotationMatrix = rotationMatrix.t();
    cv::Mat translation = -rotationMatrix*tvec;

    cv::Mat transformationMatrix = (cv::Mat_<double>(4, 4) << rotationMatrix.at<double>(0,0), rotationMatrix.at<double>(0,1), rotationMatrix.at<double>(0,2), translation.at<double>(0),
                                    rotationMatrix.at<double>(1,0), rotationMatrix.at<double>(1,1), rotationMatrix.at<double>(1,2), translation.at<double>(1),
                                    rotationMatrix.at<double>(2,0), rotationMatrix.at<double>(2,1), rotationMatrix.at<double>(2,2), translation.at<double>(2),
                                    0, 0, 0, 1);

    cv::Mat homogeneousPt = (cv::Mat_<double>(4, 1) << pt.x, pt.y, pt.z, 1.0);
    cv::Mat transformedPtMat = transformationMatrix * homogeneousPt;
    cv::Point3f transformedPt(transformedPtMat.at<double>(0), transformedPtMat.at<double>(1), transformedPtMat.at<double>(2));
    return transformedPt;
}
void computePlaneEquation(const cv::Point3f &p0, const cv::Point3f &p1, const cv::Point3f &p2, float &a, float &b, float &c, float &d) {
    //Vector p0_p1
    cv::Point3f p0_p1;
    p0_p1.x = p0.x - p1.x;
    p0_p1.y = p0.y - p1.y;
    p0_p1.z = p0.z - p1.z;

    //Vector p0_p2
    cv::Point3f p0_p2;
    p0_p2.x = p0.x - p2.x;
    p0_p2.y = p0.y - p2.y;
    p0_p2.z = p0.z - p2.z;

    //Normal vector
    cv::Point3f n = p0_p1.cross(p0_p2);

    a = n.x;
    b = n.y;
    c = n.z;
    d = -(a*p0.x + b*p0.y + c*p0.z);

    float norm = sqrt(a*a + b*b + c*c);
    a /= norm;
    b /= norm;
    c /= norm;
    d /= norm;
}
cv::Point3f compute3DOnPlaneFrom2D(const cv::Point2f &imagePt, const cv::Mat &cameraMatrix, const float a, const float b, const float c, const float d) {
    double fx = cameraMatrix.at<double>(0,0);
    double fy = cameraMatrix.at<double>(1,1);
    double cx = cameraMatrix.at<double>(0,2);
    double cy = cameraMatrix.at<double>(1,2);

    cv::Point2f normalizedImagePt;
    normalizedImagePt.x = (imagePt.x - cx) / fx;
    normalizedImagePt.y = (imagePt.y - cy) / fy;

    float s = -d / (a*normalizedImagePt.x + b*normalizedImagePt.y + c);

    cv::Point3f pt;
    pt.x = s*normalizedImagePt.x;
    pt.y = s*normalizedImagePt.y;
    pt.z = s;

    return pt;
}
int main() {
    cv::Mat img = cv::imread("data/left04.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat view;
    cv::cvtColor(img, view, cv::COLOR_GRAY2BGR);

    cv::Size boardSize(9, 6);
    std::vector<cv::Point2f> pointbuf;
    bool found = cv::findChessboardCorners( img, boardSize, pointbuf, cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_FAST_CHECK | cv::CALIB_CB_NORMALIZE_IMAGE);

    if(found) {
        cv::drawChessboardCorners( view, boardSize, cv::Mat(pointbuf), found );
    } else {
        return -1;
    }

    cv::imshow("Image", view);
    cv::waitKey();

    //Compute chessboard model points
    std::vector<cv::Point3f> modelPts;
    float squareSize = 1.0f;
    calcChessboardCorners(boardSize, squareSize, modelPts);

    //Intrinsic
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 5.3590117051349637e+02, 0, 3.4227429926016583e+02,
                            0, 5.3590117051349637e+02, 2.3557560607943688e+02,
                            0, 0, 1);
    cv::Mat distCoeffs = (cv::Mat_<double>(5, 1) << -2.6643160989580222e-01, -3.8505305722612772e-02, 1.7844280073183410e-03,
                          -2.7702634246810361e-04, 2.3850218962079497e-01);

    //Compute camera pose
    cv::Mat rvec, tvec;
    cv::solvePnP(modelPts, pointbuf, cameraMatrix, distCoeffs, rvec, tvec);

    //Check camera pose
    double rms = checkCameraPose(modelPts, pointbuf, cameraMatrix, distCoeffs, rvec, tvec);
    std::cout << "RMS error for camera pose=" << rms << std::endl;

    //Transform model point (in object frame) to the camera frame
    cv::Point3f pt0 = transformPoint(modelPts[0], rvec, tvec);
    cv::Point3f pt1 = transformPoint(modelPts[8], rvec, tvec);
    cv::Point3f pt2 = transformPoint(modelPts[53], rvec, tvec);

    //Compute plane equation in the camera frame
    float a, b, c, d;
    computePlaneEquation(pt0, pt1, pt2, a, b, c, d);
    std::cout << "Plane equation=" << a << " ; " << b << " ; " << c << " ; " << d << std::endl;

    //Compute 3D from 2D
    std::vector<cv::Point3f> pts3dCameraFrame, pts3dObjectFrame;
    double rms_3D = 0.0;
    for (size_t i = 0; i < pointbuf.size(); i++) {
        cv::Point3f pt = compute3DOnPlaneFrom2D(pointbuf[i], cameraMatrix, a, b, c, d);
        pts3dCameraFrame.push_back(pt);

        cv::Point3f ptObjectFrame = transformPointInverse(pt, rvec, tvec);
        pts3dObjectFrame.push_back(ptObjectFrame);

        rms_3D += (modelPts[i].x-ptObjectFrame.x)*(modelPts[i].x-ptObjectFrame.x) + (modelPts[i].y-ptObjectFrame.y)*(modelPts[i].y-ptObjectFrame.y) +
                  (modelPts[i].z-ptObjectFrame.z)*(modelPts[i].z-ptObjectFrame.z);
        std::cout << "modelPts[" << i << "]=" << modelPts[i] << " ; calc=" << ptObjectFrame << std::endl;
    }
    std::cout << "RMS error for model points=" << sqrt(rms_3D / pointbuf.size()) << std::endl;

    return 0;
}
Sat, 25 Jul 2015 18:10:32 -0500http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?answer=67039#post-id-67039Comment by Pedro Batista for <div class="snippet"><p>It should be possible if you know:</p>
<ul>
<li>the camera intrinsic parameters: <img alt="camera intrinsic parameters" src="/upfiles/14378614365891025.png"/></li>
<li>the camera pose: <img alt="camera pose" src="/upfiles/14378617706501256.png"/></li>
<li>the plane equation that contains the hole: <img alt="plane equation" src="/upfiles/1437864225927213.png"/></li>
</ul>
<p>For the plane equation, you should find a way to know the coordinates of 3 points in the world coordinate that lie on the same plane that contains the hole. </p>
<p>Then, you can change their coordinates to the camera frame knowing the camera pose and compute the <a href="https://en.wikipedia.org/wiki/Plane_%28geometry%29#Describing_a_plane_through_three_points">plane equation</a>.</p>
<h2>Steps</h2>
<p>Convert the 2D image coordinate of the hole in the normalized camera frame:
<img alt="image description" src="/upfiles/14378647785864535.png"/></p>
<p>Get the scale factor: <img alt="image description" src="/upfiles/14378654586431079.png"/></p>
<p>The 3D coordinate of the hole in the camera frame is then: <img alt="image description" src="/upfiles/14378656146651749.png"/></p>
<hr/>
<h2>Plane Equation</h2>
<p>Different formulas:</p>
<ul>
<li><img alt="plane equation" src="/upfiles/1437864225927213.png"/></li>
<li><img alt="plane equation" src="/upfiles/143786366794169.png"/></li>
<li><img alt="plane equation" src="/upfiles/14380164155010323.png"/></li>
<li><img alt="plane equation" src="/upfiles/14378638339238191.png"/></li>
</ul>
<p>You can then identify the quadruplet <img alt="image description" src="/upfiles/14378639881327576.png"/> with the quadruplet <img alt="image description" src="/upfiles/14378643029500772.png"/>.</p>
<p>We can have a first plane that contains the hole and another plane parallel to the first but which passes by a point at a normalized coordinate z=1 (obtained from the 2d coordinate): <img alt="image description" src="/upfiles/14378647785864535.png"/>.</p>
<p>For the two plane equations, the coefficients a, b, c are the same, only the coefficient d is different.</p>
<p>The "scale factor" is then:
<img alt="image description" src="/upfiles/14378654586431079.png"/></p>
<p>Edit:
Your case is a little easier as the camera is almost parallel to the surface. You could use the hole diameter to compute the "scale factor" knowing the ratio between the hole diameter in the real world and the hole diameter in pixel after a calibration step. </p>
<hr/>
<p><strong>Edit2 (2016/11/27):</strong></p>
<p>Here a full working example. The data used to estimate the camera intrinsic matrix can be found in the <a href="https://github.com/opencv/opencv/tree/3.1.0/samples/data">OpenCV sample data directory</a> (I used the left images). Should also be possible to do the same by computing the point from the intersection between the image ray and the plane.</p>
<p>The example code is a little bit long. What it does:</p>
<ul>
<li>extract 2D image corners using <code>cv::findChessboardCorners</code> (image used is <a href="https://github.com/opencv/opencv/blob/3.1.0/samples/data/left04.jpg">left04.jpg</a>)</li>
<li>compute the camera pose using <code>cv::solvePnP</code></li>
<li>check the camera pose by computing the RMS reprojection error</li>
<li>compute the plane equation from 3 points</li>
<li>compute the 3D point in camera and object frame using the 2D image coordinate, the plane equation and the camera pose and compute the RMS error</li>
<li>note: here the distorsion coefficients are not taken into account when computing the normalized camera coordinates</li>
</ul>
<p>Code:</p>
<pre><code>#include <iostream>
#include <opencv2/opencv.hpp>
//@ref: http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
static void calcChessboardCorners(cv::Size boardSize, float squareSize, std::vector<cv::Point3f>& corners, Pattern patternType = CHESSBOARD) {
corners.resize(0);
switch(patternType) {
case CHESSBOARD:
case CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(cv::Point3f(float(j*squareSize),
float(i*squareSize), 0));
break;
case ASYMMETRIC_CIRCLES_GRID:
for( int i = 0; i < boardSize.height; i++ )
for( int j = 0; j < boardSize.width; j++ )
corners.push_back(cv::Point3f(float((2*j + i % 2)*squareSize),
float(i*squareSize), 0));
break;
default:
CV_Error(cv::Error::StsBadArg, "Unknown pattern type\n");
}
}
double checkCameraPose(const std::vector<cv::Point3f> &modelPts, const std::vector<cv::Point2f> &imagePts, const cv::Mat &cameraMatrix,
const cv::Mat &distCoeffs, const ...</code></pre></hr/></hr/><span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=115192#post-id-115192Then, when I find an object in the image, I just need to look at this coordMat in the right index to find the "real" coordinate.Mon, 28 Nov 2016 11:06:55 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=115192#post-id-115192Comment by Pedro Batista for <div class="snippet"><p>It should be possible if you know:</p>
<ul>
<li>the camera intrinsic parameters: <img alt="camera intrinsic parameters" src="/upfiles/14378614365891025.png"/></li>
<li>the camera pose: <img alt="camera pose" src="/upfiles/14378617706501256.png"/></li>
<li>the plane equation that contains the hole: <img alt="plane equation" src="/upfiles/1437864225927213.png"/></li>
</ul>
<p>For the plane equation, you should find a way to know the coordinates of 3 points in the world coordinate that lie on the same plane that contains the hole. </p>
<p>Then, you can change their coordinates to the camera frame knowing the camera pose and compute the <a href="https://en.wikipedia.org/wiki/Plane_%28geometry%29#Describing_a_plane_through_three_points">plane equation</a>.</p>
<h2>Steps</h2>
<p>Convert the 2D image coordinate of the hole in the normalized camera frame:
<img alt="image description" src="/upfiles/14378647785864535.png"/></p>
<p>Get the scale factor: <img alt="image description" src="/upfiles/14378654586431079.png"/></p>
<p>The 3D coordinate of the hole in the camera frame is then: <img alt="image description" src="/upfiles/14378656146651749.png"/></p>
<hr/>
<h2>Plane Equation</h2>
<p>Different formulas:</p>
<ul>
<li><img alt="plane equation" src="/upfiles/1437864225927213.png"/></li>
<li><img alt="plane equation" src="/upfiles/143786366794169.png"/></li>
<li><img alt="plane equation" src="/upfiles/14380164155010323.png"/></li>
<li><img alt="plane equation" src="/upfiles/14378638339238191.png"/></li>
</ul>
<p>You can then identify the quadruplet <img alt="image description" src="/upfiles/14378639881327576.png"/> with the quadruplet <img alt="image description" src="/upfiles/14378643029500772.png"/>.</p>
<p>We can consider a first plane that contains the hole, and a second plane parallel to it that passes through the point at normalized coordinate z=1 (obtained from the 2D coordinate): <img alt="image description" src="/upfiles/14378647785864535.png"/>.</p>
<p>For the two plane equations, the coefficients a, b, c are the same, only the coefficient d is different.</p>
<p>The "scale factor" is then:
<img alt="image description" src="/upfiles/14378654586431079.png"/></p>
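The plane-through-3-points construction used above can be sketched as follows (plain C++ with hypothetical helper names; the normal is the cross product of two in-plane vectors, and d comes from substituting one of the points):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Coefficients (a, b, c, d) of the plane a*X + b*Y + c*Z + d = 0
// through p1, p2, p3: n = (p2 - p1) x (p3 - p1), d = -n . p1.
std::array<double, 4> planeFromPoints(const Vec3& p1, const Vec3& p2, const Vec3& p3) {
    Vec3 u = {p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]};
    Vec3 v = {p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]};
    Vec3 n = {u[1] * v[2] - u[2] * v[1],   // cross product u x v
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0]};
    double d = -(n[0] * p1[0] + n[1] * p1[1] + n[2] * p1[2]);
    return {n[0], n[1], n[2], d};
}
```

Three points on the plane Z = 2, e.g. (0,0,2), (1,0,2), (0,1,2), give (a, b, c, d) = (0, 0, 1, -2), matching the back-projection example above.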
<p>Edit:
Your case is a little easier, as the image plane is almost parallel to the surface. You could use the hole diameter to compute the "scale factor", knowing the ratio between the hole diameter in the real world and the hole diameter in pixels after a calibration step. </p>
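That ratio shortcut can be sketched like this (realDiameterMm and diameterPx are assumed to come from the calibration step; the helper names are hypothetical):

```cpp
#include <cassert>
#include <cmath>

// When the image plane is (almost) parallel to the surface, one known
// length fixes the metric scale of the whole plane.
double mmPerPixel(double realDiameterMm, double diameterPx) {
    return realDiameterMm / diameterPx;
}

// In-plane metric offset of pixel u from a reference pixel u0
// (the same formula applies independently to the v axis).
double metricOffset(double u, double u0, double scale) {
    return (u - u0) * scale;
}
```

For example, a 10 mm hole imaged as 50 px gives 0.2 mm/px, so a point 50 px from the reference lies 10 mm away on the surface.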
<hr/>
<p><strong>Edit2 (2016/11/27):</strong></p>
<p>Here is a full working example. The data used to estimate the camera intrinsic matrix can be found in the <a href="https://github.com/opencv/opencv/tree/3.1.0/samples/data">OpenCV sample data directory</a> (I used the left images). It should also be possible to do the same by computing the point from the intersection between the image ray and the plane.</p>
<p>The example code is a little bit long. What it does:</p>
<ul>
<li>extract 2D image corners using <code>cv::findChessboardCorners</code> (image used is <a href="https://github.com/opencv/opencv/blob/3.1.0/samples/data/left04.jpg">left04.jpg</a>)</li>
<li>compute the camera pose using <code>cv::solvePnP</code></li>
<li>check the camera pose by computing the RMS reprojection error</li>
<li>compute the plane equation from 3 points</li>
<li>compute the 3D point in camera and object frame using the 2D image coordinate, the plane equation and the camera pose and compute the RMS error</li>
<li>note: here the distortion coefficients are not taken into account when computing the normalized camera coordinates</li>
</ul>
<p>Code:</p>
<pre><code>#include <iostream>
#include <opencv2/opencv.hpp>

//@ref: http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };

static void calcChessboardCorners(cv::Size boardSize, float squareSize, std::vector<cv::Point3f>& corners, Pattern patternType = CHESSBOARD) {
    corners.resize(0);
    switch (patternType) {
    case CHESSBOARD:
    case CIRCLES_GRID:
        for (int i = 0; i < boardSize.height; i++)
            for (int j = 0; j < boardSize.width; j++)
                corners.push_back(cv::Point3f(float(j*squareSize),
                                              float(i*squareSize), 0));
        break;
    case ASYMMETRIC_CIRCLES_GRID:
        for (int i = 0; i < boardSize.height; i++)
            for (int j = 0; j < boardSize.width; j++)
                corners.push_back(cv::Point3f(float((2*j + i % 2)*squareSize),
                                              float(i*squareSize), 0));
        break;
    default:
        CV_Error(cv::Error::StsBadArg, "Unknown pattern type\n");
    }
}

double checkCameraPose(const std::vector<cv::Point3f> &modelPts, const std::vector<cv::Point2f> &imagePts, const cv::Mat &cameraMatrix,
const cv::Mat &distCoeffs, const ...</code></pre></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=115190#post-id-115190std::vector<cv::Point3f> objPoints;
std::vector<cv::Point2f> imgPoints;
cv::Mat2f coordMat = cv::Mat2f::zeros(_height, _width);
for (float x = -0.1; x <= 1.1; x += 0.001)
{
    for (float y = -0.1; y <= 1.1; y += 0.001)
    {
        objPoints.push_back(cv::Point3f(x, y, 0));
    }
}
cv::projectPoints(objPoints, _rVec, _tVec, _cameraMatrix, _distortionCoefs, imgPoints);
for (unsigned int n = 0; n != imgPoints.size(); n++)
{
    cv::Point2f coord = cv::Point2f(objPoints[n].x, objPoints[n].y);
    coordMat.at<cv::Point2f>(imgPoints[n]) = coord;
}Mon, 28 Nov 2016 11:05:04 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=115190#post-id-115190Comment by Eduardo for <div class="snippet"><p>It should be possible if you know:</p>
<p>[Duplicate of the answer quoted above; snippet omitted.]</p></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114300#post-id-114300For the camera pose, the easiest solution should be to use a marker tag (or multiple tags), or any solution where you can use `cv::solvePnP`.
Yes, the intrinsic parameters are the ones from the calibration sample (note that I don't take the distortion into account; it should be OK for cameras with low distortion).Tue, 22 Nov 2016 12:28:52 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114300#post-id-114300Comment by Eduardo for <div class="snippet"><p>It should be possible if you know:</p>
<p>[Duplicate of the answer quoted above; snippet omitted.]</p></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114386#post-id-114386Yes, I think that with this method you should be able to get what you want:
- the rotation and translation vectors should be up to a scale, but that doesn't matter in your case if you are not interested in the true 3D coordinates
- from a 2D image coordinate, you will get the 3D point in your object frame, at the same scale as [0,0] - [1,1]
- if you are successful, I would be interested in an updateWed, 23 Nov 2016 08:55:11 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114386#post-id-114386Comment by Pedro Batista for <div class="snippet"><p>It should be possible if you know:</p>
<p>[Duplicate of the answer quoted above; snippet omitted.]</p></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114388#post-id-114388Do you know if the rotation and translation already take the distortion coeffs into account (since I needed them for solvePnP)? I'm not sure how I should take the distortion into account when transforming image coordinates to plane coordinates.Wed, 23 Nov 2016 09:11:29 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114388#post-id-114388Comment by Pedro Batista for <div class="snippet"><p>It should be possible if you know:</p>
<p>[Duplicate of the answer quoted above; snippet omitted.]</p></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=115188#post-id-115188For some reason I really couldn't find a model to transform the coordinates. But your cv::projectPoints() verification gave me an idea. Since my calibrated surface is always the same, and the number of possible object locations is finite (it's limited to the number of pixels in the calibrated area), I just need to project ALL possible real coordinates in that area and use them to define a LUT to be consulted in real-time.
I have not given up on finding the model yet; I will keep trying, but for now this solution is working according to my needs (seems pretty accurate).Mon, 28 Nov 2016 11:02:57 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=115188#post-id-115188Comment by Pedro Batista for <div class="snippet"><p>It should be possible if you know:</p>
<p>[Duplicate of the answer quoted above; snippet omitted.]</p></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114295#post-id-114295How can I obtain the camera pose (rotation and translation) in relation to my plane? And the intrinsic parameters, can I obtain them using the camera calibration sample?Tue, 22 Nov 2016 11:04:50 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114295#post-id-114295Comment by Eduardo for <div class="snippet"><p>It should be possible if you know:</p>
<ul>
<li>the camera intrinsic parameters: <img alt="camera intrinsic parameters" src="/upfiles/14378614365891025.png"/></li>
<li>the camera pose: <img alt="camera pose" src="/upfiles/14378617706501256.png"/></li>
<li>the plane equation that contains the hole: <img alt="plane equation" src="/upfiles/1437864225927213.png"/></li>
</ul>
<p>For the plane equation, you should find a way to know the coordinates of 3 points in the world coordinate that lie on the same plane that contains the hole. </p>
<p>Then, you can change their coordinates to the camera frame knowing the camera pose and compute the <a href="https://en.wikipedia.org/wiki/Plane_%28geometry%29#Describing_a_plane_through_three_points">plane equation</a>.</p>
<h2>Steps</h2>
<p>Convert the 2D image coordinate of the hole into the normalized camera frame:
<img alt="image description" src="/upfiles/14378647785864535.png"/></p>
<p>Get the scale factor: <img alt="image description" src="/upfiles/14378654586431079.png"/></p>
<p>The 3D coordinate of the hole in the camera frame is then: <img alt="image description" src="/upfiles/14378656146651749.png"/></p>
<hr/>
<h2>Plane Equation</h2>
<p>Different formulas:</p>
<ul>
<li><img alt="plane equation" src="/upfiles/1437864225927213.png"/></li>
<li><img alt="plane equation" src="/upfiles/143786366794169.png"/></li>
<li><img alt="plane equation" src="/upfiles/14380164155010323.png"/></li>
<li><img alt="plane equation" src="/upfiles/14378638339238191.png"/></li>
</ul>
<p>You can then identify the quadruplet <img alt="image description" src="/upfiles/14378639881327576.png"/> with the quadruplet <img alt="image description" src="/upfiles/14378643029500772.png"/>.</p>
<p>We can have a first plane that contains the hole and a second plane parallel to the first but passing through a point at normalized coordinate z=1 (obtained from the 2D coordinate): <img alt="image description" src="/upfiles/14378647785864535.png"/>.</p>
<p>For the two plane equations, the coefficients a, b, c are the same, only the coefficient d is different.</p>
<p>The "scale factor" is then:
<img alt="image description" src="/upfiles/14378654586431079.png"/></p>
<p>Edit:
Your case is a little easier as the camera is almost parallel to the surface. You could use the hole diameter to compute the "scale factor", knowing the ratio between the hole diameter in the real world and the hole diameter in pixels after a calibration step. </p>
<hr/>
<p><strong>Edit2 (2016/11/27):</strong></p>
<p>Here is a full working example. The data used to estimate the camera intrinsic matrix can be found in the <a href="https://github.com/opencv/opencv/tree/3.1.0/samples/data">OpenCV sample data directory</a> (I used the left images). It should also be possible to do the same by computing the point from the intersection between the image ray and the plane.</p>
<p>The example code is a little bit long. What it does:</p>
<ul>
<li>extract 2D image corners using <code>cv::findChessboardCorners</code> (image used is <a href="https://github.com/opencv/opencv/blob/3.1.0/samples/data/left04.jpg">left04.jpg</a>)</li>
<li>compute the camera pose using <code>cv::solvePnP</code></li>
<li>check the camera pose by computing the RMS reprojection error</li>
<li>compute the plane equation from 3 points</li>
<li>compute the 3D point in camera and object frame using the 2D image coordinate, the plane equation and the camera pose and compute the RMS error</li>
<li>note: here the distortion coefficients are not taken into account when computing the normalized camera coordinates</li>
</ul>
<p>Code:</p>
<pre><code>#include <iostream>
#include <opencv2/opencv.hpp>

//@ref: http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/
enum Pattern { CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };

static void calcChessboardCorners(cv::Size boardSize, float squareSize, std::vector<cv::Point3f>& corners, Pattern patternType = CHESSBOARD) {
    corners.resize(0);
    switch(patternType) {
    case CHESSBOARD:
    case CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(cv::Point3f(float(j*squareSize),
                                              float(i*squareSize), 0));
        break;
    case ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(cv::Point3f(float((2*j + i % 2)*squareSize),
                                              float(i*squareSize), 0));
        break;
    default:
        CV_Error(cv::Error::StsBadArg, "Unknown pattern type\n");
    }
}

double checkCameraPose(const std::vector<cv::Point3f> &modelPts, const std::vector<cv::Point2f> &imagePts, const cv::Mat &cameraMatrix,
const cv::Mat &distCoeffs, const ...</code></pre></hr/></hr/><span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114281#post-id-114281Yes, it should work in both cases.Tue, 22 Nov 2016 07:51:51 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114281#post-id-114281Comment by Pedro Batista for <div class="snippet"><p>It should be possible if you know:</p>
<span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114270#post-id-114270This is valid whether the camera is normal to the plane or not, right?Tue, 22 Nov 2016 04:40:13 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114270#post-id-114270Comment by Eduardo for <div class="snippet"><p>It should be possible if you know:</p>
<span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114394#post-id-114394[cv::solvePnP](http://docs.opencv.org/3.1.0/d9/d0c/group__calib3d.html#ga549c2075fac14829ff4a58bc931c033d) takes the distortion coefficients. Another solution is to undistort the image and pass a zero-distortion coefficient vector.Wed, 23 Nov 2016 10:12:34 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114394#post-id-114394Comment by Pedro Batista for <div class="snippet"><p>It should be possible if you know:</p>
<span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114353#post-id-114353Is my reasoning correct?Wed, 23 Nov 2016 05:23:27 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114353#post-id-114353Comment by Pedro Batista for <div class="snippet"><p>It should be possible if you know:</p>
<span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114352#post-id-114352I have a camera looking at a plane with 4 defined corners (I know the image coordinates of these corners). I also know the camera intrinsic parameters and distortion coeffs.
I can detect flat objects on this plane, but now I'd like to know their coordinates in the plane's coordinate system: this coordinate system starts at [0,0] in one of the corners and ends at [1,1] at the opposite corner.
So, if I understood your answer correctly, the first thing I need is the translation and rotation vectors, which can be obtained with the cv::solvePnP function. I assume I can use the four known corners with my [0,0] - [1,1] system as objectPoints, and input the corresponding known image coordinates as imagePoints. After this I should have everything I need to get object coords.Wed, 23 Nov 2016 05:22:22 -0600http://answers.opencv.org/question/67008/can-i-get-2d-world-coordinates-from-a-single-image-uv-coords/?comment=114352#post-id-114352