OpenCV Q&A Forum: more accuracy getAffineTransform()
http://answers.opencv.org/question/216500/more-accuracy-getaffinetransform/
Copyright OpenCV foundation, 2012-2018. Feed updated Mon, 05 Aug 2019 09:28:22 -0500

Hi all,
The getAffineTransform and invertAffineTransform functions output the transformation matrix with dtype='float64'.
Is there any way to make them output with more precision, say dtype='float128'?
I may need more precision in my application.

In my application, I pick three points with known GPS locations and their xy pixel coordinates in the image, compute the affine matrix, and also compute the inverse matrix.
Then, using the same three points, I feed the xy values through the inverse matrix to map them back to GPS locations.
The largest error between the computed and measured GPS locations is about
3.04283202e-06 degrees, which is not really bad (about 30 cm):
    import numpy as np

    # p = inv_M * p': map the pixel point back to a GPS location
    M2 = np.array([51, 788, 1.0]).reshape(3, 1)
    result = np.matmul(inv_M, M2)
    diff = result - np.array([23.90368083, 121.53650361]).reshape(2, 1)
    print(diff)
    # [[-2.86084081e-08]
    #  [ 3.04283202e-06]]
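As a side note on precision: for three correspondences the 2x3 affine matrix is the exact solution of a small linear system, so the fit can be reproduced in plain NumPy at full float64 (or even np.longdouble) precision. The sketch below assumes that equivalence with getAffineTransform; the third point pair is invented for illustration. Note also that cv2.getAffineTransform itself takes float32 input points, which already limits raw degree values to roughly 1e-5 deg.

```python
import numpy as np

# Illustrative pixel points and (lat, lon) pairs; the third pair is invented.
pix = np.array([[51.0, 788.0], [910.0, 958.0], [400.0, 120.0]])
gps = np.array([[23.90368083, 121.53650361],
                [23.90377194, 121.53645972],
                [23.90390000, 121.53660000]])

# Rows [x, y, 1]; solving P @ X = gps gives A = X.T with gps_i = A @ [x_i, y_i, 1].
P = np.hstack([pix, np.ones((3, 1))])
A = np.linalg.solve(P, gps).T           # 2x3 affine matrix, full float64

# Round-trip check at a control point: the residual is float64 round-off only.
p0 = np.array([51.0, 788.0, 1.0])
print(A @ p0 - gps[0])
```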
But for other test points, the errors are much larger.
For example, take the bottom point (GPS 23.90377194, 121.53645972).
The error is 0.00021149 degrees in longitude, which is far too much (**about 21 meters**):
    M2 = np.array([910, 958, 1.0]).reshape(3, 1)
    result = np.matmul(inv_M, M2)  # p = inv_M * p'
    # compare to the measured GPS location
    diff = result - np.array([23.90377194, 121.53645972]).reshape(2, 1)
    print(diff)
    # [[0.00015057]
    #  [0.00021149]]
Here is my notebook:
https://github.com/wennycooper/hualien/blob/master/1010.ipynb

Original image (you can use the mouse to get the xy values of the feature points):
https://github.com/wennycooper/hualien/blob/master/1010.jpg

Feature points with GPS locations (note that this is a resized diagram, so its xy values are meaningless):
https://github.com/wennycooper/hualien/blob/master/1010_with_gps_locations.png

The GPS locations were provided by a vendor, who claimed the errors should be <= 30 cm.
To check whether the GPS locations are trustworthy, I plotted them in ROS rviz and checked their relative positions against the labeled image. I think the GPS locations are trustworthy.
Here is the png for checking the GPS locations:
https://github.com/wennycooper/hualien/blob/master/gps_locations.png
Any idea?

Asked Fri, 02 Aug 2019 03:50:39 -0500

Comment by Kevin Kuei (Mon, 05 Aug 2019 03:12:32 -0500):

I think the pinpointing and the image are not the problem. I also plotted those GPS locations in ROS rviz and checked them by eye, and I think the GPS locations are trustworthy.
Comment by LBerger (Mon, 05 Aug 2019 03:03:59 -0500):

I think your question is off-topic; this is an OpenCV forum. You should read [this article on sources of GPS errors](https://www.aboutcivil.org/sources-of-errors-in-gps.html). As @witek wrote, the problem may be in the data precision, not in the matrix inversion.
Comment by Kevin Kuei (Mon, 05 Aug 2019 02:49:51 -0500):

Hi, thanks for the reply. I've attached my images and code. Please help me identify the problem. Thank you!
Comment by Witek (Fri, 02 Aug 2019 16:02:25 -0500):

I think the main source of error could be the image itself and the precision of pinpointing the three points. Are you using some sort of image processing to get subpixel accuracy, or just marking the three points with a mouse?
Comment by LBerger (Fri, 02 Aug 2019 04:19:16 -0500):

I don't understand the concern: float64 gives about 15 significant digits, so over a distance of 1e6 km (7 digits) the precision is on the order of micrometers. With modern GPS, a 10 cm error is already really good.
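This point can be checked directly: the spacing between adjacent float64 values near a coordinate of magnitude ~121.5 degrees is nanometer-scale on the ground, so float128 would not change the result. A quick sketch (111,320 m/deg is a rough approximation):

```python
import numpy as np

lon = 121.53650361
ulp_deg = np.spacing(np.float64(lon))   # gap to the next representable float64
m_per_deg = 111_320.0                    # rough meters per degree

print(ulp_deg)                           # ~1.4e-14 degrees
print(ulp_deg * m_per_deg)               # ~1.6e-9 m, i.e. nanometers
```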
Answer by Witek (Mon, 05 Aug 2019 05:33:21 -0500):
So it seems your goal is to get GPS (or real-world metric) coordinates from pixel coordinates, based on a 3-point affine transform of a lens-distorted image that includes a perspective view? Well, that is not going to work. You need to undistort your image first and then use at least 4 points to get the perspective transform, or, even better, use all the points you have with findHomography (assuming the scene/road is flat). You probably do not have your camera model and probably cannot calibrate it in the standard way using a chessboard, as you likely have no free access to the camera. In that case you could use the Tsai method (http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/DIAS1/) with coplanar data points. Unfortunately, I have never done the calibration part myself, so I cannot help you further with it (I have a very similar problem and need to undistort a street scene, but I cannot seem to find the time to work on it).
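To make the suggestion concrete, here is a bare-bones sketch of the idea behind findHomography: a direct linear transform (DLT) estimate of the 3x3 perspective matrix from four or more coplanar correspondences. The point values below are invented for illustration, and cv2.findHomography additionally offers point normalization and RANSAC, which a real application should prefer.

```python
import numpy as np

def find_homography(src, dst):
    """DLT: solve for 3x3 H with dst ~ H @ src in homogeneous coordinates."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in the 9 entries of H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    H = Vt[-1].reshape(3, 3)         # null-space vector = solution up to scale
    return H / H[2, 2]

def apply_h(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])  # perspective divide

# Four illustrative pixel points and target plane coordinates (made up).
src = [(51, 788), (910, 958), (400, 120), (700, 500)]
dst = [(0.0, 0.0), (10.0, 0.5), (4.2, 9.1), (7.7, 3.3)]
H = find_homography(src, dst)
print(apply_h(H, src[0]))            # reproduces dst[0] up to float64 error
```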
Comment by Witek (Mon, 05 Aug 2019 09:28:22 -0500):

If there is someone here who can help with camera calibration from a single planar view, that would be nice.