OpenCV Q&A Forum RSS feed (http://answers.opencv.org/questions/), OpenCV answers. Copyright OpenCV Foundation, 2012-2018.

more accuracy getAffineTransform()
http://answers.opencv.org/question/216500/more-accuracy-getaffinetransform/

Hi all,
The getAffineTransform and invertAffineTransform functions output the transformation matrix with dtype='float64'.
Is there any way to make them output with more precision, say dtype='float128'?
I may need more precision in my application.
In my application, I choose three points with GPS locations and their xy coordinates in the image to compute the matrix, and I compute the inverse matrix too.
Then I take the same three points, input the xy values, and use the inverse matrix to compute them back to GPS locations.
The largest error between a computed GPS location and the measured GPS location is about
3.04283202e-06 degrees, which is not really bad (about 30 cm):
M2 = np.array([51, 788, 1.0])
M2 = M2.reshape(3, 1)
result = np.matmul(inv_M, M2)
# p = inv_M * p'
diff = result - np.array([23.90368083, 121.53650361]).reshape(2,1)
print(diff)
[[-2.86084081e-08]
[ 3.04283202e-06]]
But for other test points, the errors are too large.
For example, consider the bottom point (GPS 23.90377194, 121.53645972).
The error is 0.00021149 degrees in longitude, which is too much (**about 21 meters**).
M2 = np.array([910, 958, 1.0])
M2 = M2.reshape(3, 1)
result = np.matmul(inv_M, M2)  # p = inv_M * p'
diff = result - np.array([23.90377194, 121.53645972]).reshape(2,1)  # compare to the measured gps location
print(diff)
[[0.00015057]
[0.00021149]]
Here is my ipynb
[link text](https://github.com/wennycooper/hualien/blob/master/1010.ipynb)
Original image (you can use the mouse to get the xy values of feature points):
[link text](https://github.com/wennycooper/hualien/blob/master/1010.jpg)
Feature points with GPS locations (note that this is a resized diagram, so the xy values are meaningless):
[link text](https://github.com/wennycooper/hualien/blob/master/1010_with_gps_locations.png)
The GPS locations were provided by a vendor, who claimed the errors should be <= 30 cm.
To check whether the GPS locations are trustworthy, I plotted them in ROS rviz and checked their relative locations against the labeled image. In the end, I think the GPS locations are trustworthy.
Here is the png for checking the GPS locations: [link text](https://github.com/wennycooper/hualien/blob/master/gps_locations.png)
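For what it's worth: float64 round-off at these magnitudes is around 1e-14 degrees, far below the 3.04e-06 residual, so the residual is more likely dominated by the integer-rounded pixel coordinates and, at the other points, by the affine model itself (a ground plane seen through a perspective camera is a homography, so getPerspectiveTransform/findHomography with four or more points should fit far better than 21 m). If extended precision is still wanted, the 3-point system can be solved by hand in NumPy at np.longdouble (80-bit on most x86 builds, but plain float64 on some platforms). A sketch, using Cramer's rule because np.linalg.solve would silently cast back down to float64:

```python
import numpy as np

def affine_from_3pts(src_xy, dst_gps, dtype=np.longdouble):
    """Exact 2x3 affine from 3 correspondences, at extended precision.

    np.linalg.solve casts longdouble input down to float64, so the
    3x3 system is solved by hand with Cramer's rule instead.
    """
    src = np.asarray(src_xy, dtype=dtype)    # shape (3, 2): pixel xy
    dst = np.asarray(dst_gps, dtype=dtype)   # shape (3, 2): lat, lon
    A = np.hstack([src, np.ones((3, 1), dtype=dtype)])  # rows [x, y, 1]

    def det3(m):
        return (m[0, 0] * (m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1])
              - m[0, 1] * (m[1, 0] * m[2, 2] - m[1, 2] * m[2, 0])
              + m[0, 2] * (m[1, 0] * m[2, 1] - m[1, 1] * m[2, 0]))

    d = det3(A)
    M = np.empty((2, 3), dtype=dtype)
    for r in range(2):        # output row: lat, then lon
        for c in range(3):    # coefficient column
            Ac = A.copy()
            Ac[:, c] = dst[:, r]   # Cramer: replace column c
            M[r, c] = det3(Ac) / d
    return M  # dst = M @ [x, y, 1]
```

By construction the three calibration points round-trip exactly; the error at the other points will not shrink with more precision, because it is model error rather than floating-point error.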
Any idea?

Kevin Kuei - Fri, 02 Aug 2019 03:50:39 -0500
http://answers.opencv.org/question/216500/

What is estimateGeometricTransform of Matlab in openCV?
http://answers.opencv.org/question/184333/what-is-estimategeometrictransform-of-matlab-in-opencv/

Hello,
I am trying to rewrite the following code [Object Detection in a Cluttered Scene Using Point Feature Matching](https://in.mathworks.com/help/vision/examples/object-detection-in-a-cluttered-scene-using-point-feature-matching.html) in OpenCV using python.
It would be great if somebody could explain to me how estimateGeometricTransform in the Matlab code works, and whether there is an equivalent OpenCV function. I have seen people say getAffineTransform is equivalent to estimateGeometricTransform, but I am not sure.
So far the code in Python is:
import numpy as np
import cv2
# Read the template
tramTemplate = cv2.imread('template.jpg')
# Show the template
cv2.imshow("Template", tramTemplate)
# Read the input Image
inputImage = cv2.imread('Main.jpg')
# Show the input Image
cv2.imshow("Main Image",inputImage)
# Create SURF object.
surf = cv2.xfeatures2d.SURF_create(20000)
# Find keypoints and descriptors directly
kp1, des1 = surf.detectAndCompute(inputImage,None)
kp2, des2 = surf.detectAndCompute(tramTemplate,None)
print("Key points of an Input Image, Descriptors of an Input Image", len(kp1), len(des1))
print("Key points of Tram Template, Descriptors of Tram Template", len(kp2), len(des2))
#Detect feature points in both images.
inputImagePoint = cv2.drawKeypoints(inputImage,kp1,None,(255,0,0),4)
tramTemplatePoint = cv2.drawKeypoints(tramTemplate,kp2,None,(255,0,0),4)
cv2.imshow("Input Image Key Point", inputImagePoint)
cv2.imshow("Tram Template Key Point", tramTemplatePoint)
# Match the features using their descriptors.
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
# Show Matched features
M = np.array(matches)
M1 = M[:, 0]
M2 = M[:, 1]
# Apply ratio test
good = []
for m,n in matches:
if m.distance < 0.75*n.distance:
good.append([m])
matchedFeatures = cv2.drawMatchesKnn(inputImage, kp1, tramTemplate, kp2, good, None, flags=2)
cv2.imshow("Matched Features", matchedFeatures)
# Part of code is missing
aff = cv2.getAffineTransform(M1, M2)
cv2.imshow("Affine Transformed Image", aff)
# Get the bounding polygon of the reference image.
fromCenter = False
rectangleBox = cv2.selectROI(tramTemplate, fromCenter)
cv2.waitKey()
In the Matlab code, I don't understand what the following lines mean. Can somebody please explain them to me? It says "Display putatively matched features.", but I don't get how.
matchedBoxPoints = boxPoints(boxPairs(:, 1), :);
matchedScenePoints = scenePoints(boxPairs(:, 2), :);
I am kind of stuck at this point. I believe the variable "boxPoints" holds the key points and "boxPairs" the features matched using their descriptors, right?
Also, getAffineTransform gives me an error: "src data type = 17 is not supported".
I kind of need this for my project.
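A hedged pointer rather than a definitive answer: the closest OpenCV analogue of Matlab's estimateGeometricTransform is cv2.estimateAffine2D / cv2.estimateAffinePartial2D (or cv2.findHomography for a projective model). These take N matched point coordinates plus a robust method such as cv2.RANSAC; getAffineTransform instead expects exactly 3 point pairs as float32 coordinates, so passing it arrays of DMatch objects is consistent with the "src data type = 17 is not supported" error. The matches first have to be converted to coordinates, e.g. `src_pts = np.float32([kp1[m[0].queryIdx].pt for m in good])`. The non-robust core of the estimation is an ordinary least-squares fit, sketched here in plain NumPy (the helper name is mine, not OpenCV's):

```python
import numpy as np

def fit_affine_lstsq(src_pts, dst_pts):
    """Least-squares 2x3 affine M with dst ≈ M @ [x, y, 1].

    This is the non-robust core of what estimateAffine2D computes;
    OpenCV wraps it in RANSAC to discard bad matches.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = len(src)
    X = np.zeros((2 * n, 6))
    X[0::2, 0:2] = src          # rows for the x' equations
    X[0::2, 2] = 1.0
    X[1::2, 3:5] = src          # rows for the y' equations
    X[1::2, 5] = 1.0
    # Unknowns: (a11, a12, tx, a21, a22, ty); targets interleave x', y'.
    params, *_ = np.linalg.lstsq(X, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)
```

With real keypoints you would instead call `M, inliers = cv2.estimateAffine2D(src_pts, dst_pts, method=cv2.RANSAC)` and then map the template's corner polygon through M, which mirrors what the Matlab example does with estimateGeometricTransform.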
Thank you very much.

Riya208 - Fri, 09 Feb 2018 13:29:37 -0600
http://answers.opencv.org/question/184333/

need a help on MatOfPoint2f
http://answers.opencv.org/question/14068/need-a-help-on-matofpoint2f/
I am rewriting WarpAffine-related code for Android, but I can hardly work out how to use MatOfPoint2f.
My previous code is like: (using OpenCVSharp)
CvPoint2D32f[] src_pf = new CvPoint2D32f[3];
CvPoint2D32f[] dst_pf = new CvPoint2D32f[3];
src_pf[0] = new CvPoint2D32f(0,0);
src_pf[1] = new CvPoint2D32f(100,100);
src_pf[2] = new CvPoint2D32f(100,70);
dst_pf[0] = new CvPoint2D32f(0,0);
dst_pf[1] = new CvPoint2D32f(100,100);
dst_pf[2] = new CvPoint2D32f(200,70);
CvMat perspective_matrix = Cv.GetAffineTransform(src_pf, dst_pf);
Cv.WarpAffine(src, dst, perspective_matrix);
In Android, the code should look like this:
MatOfPoint2f src_pf = new MatOfPoint2f();
MatOfPoint2f dst_pf = new MatOfPoint2f();
//how do I set up the position numbers in MatOfPoint2f here?
Mat perspective_matrix = Imgproc.getAffineTransform(src_pf, dst_pf);
Imgproc.warpAffine(src, dst, perspective_matrix);
How do I set up the point coordinates in MatOfPoint2f?
Papercut - Mon, 27 May 2013 17:10:19 -0500
http://answers.opencv.org/question/14068/

Image Registration by Manual marking of corresponding points using OpenCV
http://answers.opencv.org/question/11796/image-registration-by-manual-marking-of-corresponding-points-using-opencv/

1. I have a **processed binary image** of **dimension 300x300**. This processed image contains a few objects (person or vehicle).
![processed binary image][1]
2. I also have another **RGB image** of the same scene of **dimension 640x480**. It is **taken from a different position**.
![enter image description here][2]
**note : both cameras are not the same**
I can detect objects to some extent in the first image using background subtraction. **I want to detect the corresponding objects in the 2nd image**. I went through these OpenCV functions:
- [getAffineTransform][3]
- [getPerspectiveTransform][4]
- [findHomography][5]
- [estimateRigidTransform][6]
All these functions require corresponding points (coordinates) in the two images.
In the 1st (binary) image I only have the information that an object is present; it does not have features exactly matching the second (RGB) image.
I thought conventional feature matching to determine corresponding control points, which could then be used to estimate the transformation parameters, is **not feasible**, because I don't think I can detect and match features between a binary image and an RGB image (am I right?).
If I am wrong, what features could I use, and how should I proceed with feature matching, finding corresponding points, and estimating the transformation parameters?
**The solution I tried is more of a manual marking to estimate the transformation parameters** (please correct me if I am wrong):
**Note : There is no movement of both cameras.**
- Manually marked rectangles around objects in processed image(binary)
- Noted down the coordinates of the rectangles
- Manually marked rectangles around objects in 2nd RGB image
- Noted down the coordinates of the rectangles
- Repeated above steps for different samples of 1st binary and 2nd RGB images
Now that I have some 20 corresponding points, I used them in the function as:
> findHomography(src_pts, dst_pts, 0) ;
So once I detect an object in the 1st image:
- I draw a bounding box around it,
- transform the coordinates of its vertices using the transformation found above,
- and finally draw a box in the 2nd RGB image with the **transformed coordinates as vertices**.
But this doesn't place the box in the 2nd RGB image exactly over the person/object; instead it is drawn somewhere else. Even though I took several sample binary and RGB images and used several corresponding points to estimate the transformation parameters, they do not seem accurate enough.
What do the CV_RANSAC and CV_LMEDS options and the ransacReprojThreshold parameter mean, and how do I use them?
Is my approach good? What should I modify or do to make the registration accurate?
Any alternative approach to be used?
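On the method flags: CV_RANSAC repeatedly fits the homography to random 4-point subsets and keeps the hypothesis with the most inliers (pairs whose reprojection error is below ransacReprojThreshold, measured in pixels), CV_LMEDS minimizes the median reprojection error, and method 0 does a least-squares fit over all pairs, so a single badly marked rectangle can skew the whole result. The fit itself is the classic Direct Linear Transform; a plain-NumPy sketch (helper names are mine, not OpenCV's):

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Direct Linear Transform: H with dst ~ H @ [x, y, 1] (homogeneous).

    Each correspondence contributes two rows; the best H (least-squares
    for noisy, over-determined input) is the smallest singular vector.
    """
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        rows.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)      # null-space (or closest) vector
    return H / H[2, 2]            # normalize so H[2,2] == 1

def apply_homography(H, pts):
    """Map Nx2 points through H, dividing out the projective scale."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

This is, up to normalization details, what findHomography(src_pts, dst_pts, 0) computes; RANSAC/LMEDS only change which of your 20 correspondences the fit is allowed to trust. Mapping the box vertices is exactly what apply_homography does, which corresponds to cv2.perspectiveTransform.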
[1]: http://i.stack.imgur.com/WEcMJ.jpg
[2]: http://i.stack.imgur.com/xxqDN.jpg
[3]: http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#Mat%20getAffineTransform%28InputArray%20src,%20InputArray%20dst%29
[4]: http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#Mat%20getPerspectiveTransform%28InputArray%20src,%20InputArray%20dst%29
[5]: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#Mat%20findHomography%28InputArray%20srcPoints,%20InputArray%20dstPoints,%20int%20method,%20double%20ransacReprojThreshold,%20OutputArray%20mask%29
[6]: http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#estimaterigidtransform

Karthik - Tue, 16 Apr 2013 20:55:07 -0500
http://answers.opencv.org/question/11796/

Wrong matrix from getAffineTransform
http://answers.opencv.org/question/11455/wrong-matrix-from-getaffinetransform/

I need to use an affine transformation: find the matrix that describes the affine transform between two images (from which I took 3 matching points) and afterwards multiply it by some coordinates (points in the first frame) to find the corresponding points in the second.
I'm experimenting with the OpenCV function getAffineTransform using 3 points that stay still.
This is my code:
Point2f terp1[]={Point2f(1,1),Point2f(30,1),Point2f(30,30)};
Point2f terp2[]={Point2f(1,1),Point2f(30,1),Point2f(30,30)};
Mat A_m( 2, 3, CV_32FC1 );
A_m = getAffineTransform(terp1,terp2);
PrintMatrix(A_m);
I expect the matrix to be:
1 0 0 / 0 1 0
...but I receive (from the PrintMatrix function) huge numbers that don't make sense, such as 2346027296 32673544 32900240 // 2346027296 32673544 32900240.
Why?
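A likely cause (hedged, since PrintMatrix isn't shown): getAffineTransform ignores the pre-allocated CV_32FC1 header and returns a fresh CV_64FC1 matrix, so after the assignment A_m holds doubles. If PrintMatrix then reads the buffer with at&lt;float&gt;, it reinterprets the bytes of doubles as floats and prints nonsense. The same effect is easy to reproduce in NumPy:

```python
import numpy as np

# The identity affine that getAffineTransform returns for identical
# point triples -- stored as float64, which is what CV_64FC1 holds.
M64 = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])

# Reinterpreting the same bytes as float32 (what an at<float>-style
# reader does on a CV_64FC1 buffer) yields values unrelated to the 1s
# and 0s actually stored.
M_as_f32 = M64.view(np.float32)
print(M64.dtype, M_as_f32.dtype)   # float64 float32
print(M_as_f32)                    # garbage, not the identity
```

The fix on the C++ side would be to print with at&lt;double&gt;, or to convert first: A_m.convertTo(A_m, CV_32FC1);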
matteo - Fri, 12 Apr 2013 04:59:40 -0500
http://answers.opencv.org/question/11455/

GetPerspectiveTransform mathematical explanation
http://answers.opencv.org/question/5018/getperspectivetransform-mathematical-explanation/

Hi guys!
I'm doing a course project devoted to perspective transformations.
I have the ready-made OpenCV function GetPerspectiveTransform, which takes my source array
CvPoint2D32f srcQuad[4] and destination array CvPoint2D32f dstQuad[4] and computes the perspective-transformation matrix CvMat* warp_matrix = cvCreateMat(3,3,CV_32FC1);
But is there a mathematical analogue of this function?
I mean, can I replace the ready-made GetPerspectiveTransform function with mathematical formulas?
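There is indeed a purely mathematical formulation. Write the perspective map with the homography entries h_ij and fix the overall scale by setting h_33 = 1; each of the four point pairs (x_i, y_i) -> (u_i, v_i) then yields two equations that are linear in the eight remaining unknowns:

```latex
u_i = \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + 1},
\qquad
v_i = \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + 1},
\end{gathered}
\quad\Longrightarrow\quad
\begin{aligned}
h_{11} x_i + h_{12} y_i + h_{13} - u_i h_{31} x_i - u_i h_{32} y_i &= u_i,\\
h_{21} x_i + h_{22} y_i + h_{23} - v_i h_{31} x_i - v_i h_{32} y_i &= v_i .
\end{aligned}
```

Stacking these for i = 1..4 gives an 8x8 linear system A h = b in h = (h_11, h_12, h_13, h_21, h_22, h_23, h_31, h_32), solvable by Gaussian elimination, which (up to implementation details) is what GetPerspectiveTransform does internally.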
Thank you very much for any assistance.

lindstorm - Sun, 09 Dec 2012 03:07:20 -0600
http://answers.opencv.org/question/5018/

getAffineTransform, getPerspectiveTransform or findHomography?
http://answers.opencv.org/question/4538/getaffinetransform-getperspectivetransform-or-findhomography/

Following scenario: I've taken a photo for further analysis. The photo contains a sheet of paper. First of all I'm trying to detect the corners of the sheet. Once I've got them I want to stretch/transform the image so that its corners fit a new `Mat`'s corners (as if I had scanned the image).
Reading the documentation on the above-mentioned functions, I'm not quite sure which is right for my needs. `getAffineTransform` takes only three point pairs (which works quite well, but leaves the lower-right corner untouched).
`getPerspectiveTransform` uses four point pairs and `findHomography` even more, right? So I guess one of those is the one I should go for. For now I did not manage to get it working, though. I'm using `vector<Point2f> sourcePoints, destinationPoints;`, filling them with the found corners and my calculated new points (which are basically `[width, 0]`, `[0, 0]`, `[0, height]` and `[width, height]` of the new `Mat`). After creating the two vectors I create the transformation matrix using either `getPerspectiveTransform` or `findHomography` and finally pass it to `warpPerspective`. The last step is the one that crashes my application with
`OpenCV Error: Assertion failed (dims == 2 && (size[0] == 1 || size[1] == 1 || size[0]*size[1] == 0)) in create, file /Users/Aziz/Documents/Projects/opencv_sources/trunk/modules/core/src/matrix.cpp, line 1310
libc++abi.dylib: terminate called throwing an exception`.
Since I'm not sure this is even the right approach, I would love to hear your opinion before I try to fix the error.
Thanks a lot!
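A sketch of the direction that should work (illustrative values, not a tested fix): getPerspectiveTransform is the natural choice here, since there are exactly four trusted corner pairs; findHomography only pays off with many noisy correspondences. The assertion comes from Mat::create, which is consistent with a point vector or the 3x3 matrix being passed where warpPerspective expects the source image (the C++ order is warpPerspective(src, dst, M, dsize)). The two corner arrays must also be ordered pairwise. The matrix computation itself, shown here in plain NumPy:

```python
import numpy as np

def perspective_from_4pts(src, dst):
    """Exact 3x3 perspective matrix from 4 point pairs.

    Fixes h33 = 1 and solves the resulting 8x8 linear system, which is
    the same computation cv2.getPerspectiveTransform performs.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Corner ordering must agree pairwise between the two arrays:
corners = np.float32([[120, 80], [480, 60], [500, 400], [90, 420]])  # found in photo
w, h = 420, 594                                                      # target sheet size
target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])                # "scanned" corners
M = perspective_from_4pts(corners, target)
# In OpenCV: M = cv2.getPerspectiveTransform(corners, target)
#            scan = cv2.warpPerspective(photo, M, (w, h))
```

Note that the last two commented lines pass the source *image* and the output *size* to warpPerspective, not the point vectors.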
fflohei - Sun, 25 Nov 2012 11:46:20 -0600
http://answers.opencv.org/question/4538/