2020-08-05 04:17:30 -0600 | received badge | ● Notable Question (source) |
2019-03-14 04:08:29 -0600 | received badge | ● Popular Question (source) |
2017-10-21 15:47:06 -0600 | commented answer | Weird words in my bag Thanks, this makes perfect sense to me. I tried KAZE and it looks much better. |
2017-10-21 15:45:17 -0600 | marked best answer | Weird words in my bag Hello, I'm using BOWKMeansTrainer to cluster ORB keypoints with FREAK descriptors. This combination of detector and descriptor shows good results for the drawing-style images I'm trying to analyze; however, BOWKMeansTrainer gives me some problems... After training, when I process an image with BOWImgDescriptorExtractor and get its image descriptor, I often see that words (clusters) consist of keypoints that are far from each other in the image. Here are the highlighted keypoints of one of the words. Of course, there are other words that are localized well enough, but I'm curious why it happens that some are not localized? |
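A sketch of why this can happen: k-means in BOWKMeansTrainer clusters in descriptor space only, so image coordinates never enter the objective, and two keypoints on opposite sides of the image land in the same word whenever their descriptors are similar. The numbers and the `nearest_word` helper below are illustrative, not the OpenCV API:

```python
import numpy as np

# Toy illustration: visual words are clusters in DESCRIPTOR space, so two
# keypoints far apart in the IMAGE can still fall into the same word if
# their descriptors are similar.
rng = np.random.default_rng(0)

# Two hypothetical cluster centers in an 8-D descriptor space.
centers = rng.normal(size=(2, 8))

# Keypoint A at image position (10, 10) and keypoint B at (500, 400):
# spatially distant, but both descriptors lie near centers[0].
desc_a = centers[0] + 0.01 * rng.normal(size=8)
desc_b = centers[0] + 0.01 * rng.normal(size=8)

def nearest_word(desc, centers):
    # Conceptually what BOWImgDescriptorExtractor does: nearest-centroid match.
    return int(np.argmin(np.linalg.norm(centers - desc, axis=1)))

word_a = nearest_word(desc_a, centers)
word_b = nearest_word(desc_b, centers)
# Both keypoints map to the same visual word despite their image distance.
```

This is consistent with the accepted answer's point: switching detector/descriptor (e.g. to KAZE) changes which descriptors look similar, and therefore how spatially coherent the words appear.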
2017-10-21 08:45:27 -0600 | received badge | ● Student (source) |
2017-10-21 06:41:34 -0600 | asked a question | Weird words in my bag Weird words in my bag Hello, I'm using BOWKMeansTrainer to cluster ORB keypoints with FREAK descriptors. This combinatio |
2017-04-04 03:33:05 -0600 | commented question | Turning ArUco marker in parallel with camera plane I switched to using an ArUco Board and it improved the accuracy a lot. findHomography() and getPerspectiveTransform() produce the following result for me. |
2017-03-31 10:28:43 -0600 | commented question | Turning ArUco marker in parallel with camera plane Yes, I plan to switch to subpixel accuracy too. I just wanted to make sure first that I'm not going in the wrong direction by not calculating the desired coordinates from the vectors. I haven't checked the homography topic yet, though... I believe it will give me more understanding. |
2017-03-31 07:20:58 -0600 | commented question | Turning ArUco marker in parallel with camera plane I added source images and corner coordinates. Will read more on homography. Thanks! |
2017-03-30 04:24:14 -0600 | commented question | Weird ArUco marker behavior It looks like these could help; I'll try to play with them after I solve the problem that most likely causes the most distortion for me - http://answers.opencv.org/question/13... |
2017-03-30 04:19:28 -0600 | asked a question | Turning ArUco marker in parallel with camera plane I need to warp the image to fix its perspective distortion based on a detected marker. In other words - to make the plane where the marker lies parallel to the camera plane. In general it works for me when I simply map the points of the perspective-distorted marker to its orthogonal position (Sketch) with getPerspectiveTransform() and then warpPerspective(), which warps the whole image: The following are sample params for getPerspectiveTransform(). The result looks OK, but not always, so I think this way is wrong. My assumption is that, since I can get the pose estimation of a detected marker (which shows its relation to the camera), I can calculate the required marker position (or camera position?) using the marker points and the rotation/translation vectors. Now I'm stuck, basically not understanding the math behind the solution. Could you advise? UPDATE The following is a source image with detected markers. The white circles represent the desired position of the marker that will be used in getPerspectiveTransform(). The following is the result image, which is still distorted: |
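As a sketch of what the described corner-to-square mapping does, here is a hand-rolled DLT homography standing in for getPerspectiveTransform(); the corner coordinates are made up for illustration:

```python
import numpy as np

def homography_from_4pts(src, dst):
    # Direct Linear Transform: stack two equations per correspondence and
    # take the SVD null vector as the 3x3 homography.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical detected (perspective-distorted) marker corners ...
src = [(120.0, 80.0), (340.0, 95.0), (330.0, 300.0), (110.0, 280.0)]
# ... and the desired fronto-parallel square (the "white circles" positions).
dst = [(100.0, 100.0), (300.0, 100.0), (300.0, 300.0), (100.0, 300.0)]

H = homography_from_4pts(src, dst)

def apply_h(H, p):
    # Apply the homography in homogeneous coordinates.
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

warpPerspective() then applies H to every pixel. Since H is built from only four corners of a small marker, any corner noise propagates across the whole image, which is one reason the later switch to a Board (many corners) stabilized the result.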
2017-03-29 15:36:54 -0600 | commented answer | Aruco - draw position+orientation relative to marker Hi, unfortunately the link in the answer no longer points to any function. Could you post the function name or, maybe, the code that worked for you? |
2017-03-29 00:43:17 -0600 | commented question | Weird ArUco marker behavior Yeah... it's the blurring that produces such a result; I just tested with a same-sized but blurred image. Now I'm wondering how to fix this situation, because in a real photo there is always some amount of blurring. |
2017-03-29 00:39:28 -0600 | commented question | Weird ArUco marker behavior Only detection box. Here are both source images: https://www.dropbox.com/s/mbvtjvc5pl3... |
2017-03-28 22:54:11 -0600 | commented question | Weird ArUco marker behavior The second image is the result (on the right). 1.png and 2.png are two attempts, where 2.png is just a little bit smaller. I just got the idea to play with the detection params. |
2017-03-28 15:35:38 -0600 | received badge | ● Editor (source) |
2017-03-28 15:34:43 -0600 | asked a question | Weird ArUco marker behavior This is the first time I've worked with ArUco markers, and I was disappointed by the weird distortion after a perspective transformation based on the recognized markers. I assumed it was because of camera calibration errors and decided to test with "ideal" markers. I calibrated the camera with the pattern file (not printed and photographed, but straight from the source file). Then I generated my marker and put it on a JPG image without any distortions. My program found the marker, I generated a new position for the marker and warped the image: That worked OK: 1.PNG Then I downscaled the source image with one pyrDown call and tried to recognize the marker again: Could you please explain why this weird distortion happens and how I can avoid it in a real situation with a photo or video stream? UPDATE: On the left is the source image, on the right - the destination. I find the marker on both to make sure that the code above turns the source image into an orthogonal projection to the camera. I assume I should use the vectors I receive from estimatePoseSingleMarkers here, but for now I'm just confused by the distortion of the markers detected in the source image. |
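A rough numpy sketch of one plausible source of the error (an assumption, not a diagnosis): pyrDown blurs and halves the image, so corner coordinates reported at integer-pixel precision carry up to about a pixel of error once mapped back to the original scale, and a perspective transform built from just four such corners spreads that error over the whole warp. The corner x-coordinates below are invented:

```python
import numpy as np

# pyrDown blurs with a small Gaussian and drops every other row/column, so a
# corner at x in the source lands near x/2 and an integer-accuracy detector
# quantizes it to a whole pixel. Mapped back to the source scale, the error
# can approach a full pixel per corner.
corners = np.array([57.0, 123.0, 298.0, 402.0])  # hypothetical corner x-coords

down = np.round(corners / 2.0)   # detector reports integer pixels
back = down * 2.0                # same position at the original scale
err = np.abs(back - corners)     # quantization error, up to ~1 px here
```

Subpixel corner refinement (the "subpixel accuracy" mentioned in the later comments) is the usual mitigation for this class of error.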
2016-09-20 08:02:47 -0600 | received badge | ● Enthusiast |
2016-09-14 14:07:27 -0600 | received badge | ● Supporter (source) |
2016-09-14 14:07:25 -0600 | received badge | ● Scholar (source) |
2016-09-14 14:07:20 -0600 | commented answer | Mat rotation center Thanks! I expected that the rotation would be centered in the resulting Mat, but now I see it couldn't be without an explicitly specified offset: |
2016-09-14 05:04:46 -0600 | asked a question | Mat rotation center Hi, I'm rotating a 10x1 image by specifying the center at 5,0: [255, 255, 255, 255, 255, 255, 255, 255, 255, 255] I assumed the result would be a vertical line at the center of a 10x10 Mat, but it is shifted and cropped at the top. I'm trying to understand why this happens. |
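A numpy sketch of what happens here, building by hand the matrix that getRotationMatrix2D(center=(5,0), angle=90, scale=1) is documented to return: warpAffine writes the output in the source coordinate frame and does not re-center the result inside the destination Mat, so rotating about (5,0) sends half of the 10x1 line to negative y, which is then cropped:

```python
import numpy as np

# The 2x3 affine matrix per the getRotationMatrix2D formula:
#   [[a, b, (1-a)*cx - b*cy], [-b, a, b*cx + (1-a)*cy]]
cx, cy, angle = 5.0, 0.0, np.deg2rad(90)
a, b = np.cos(angle), np.sin(angle)
M = np.array([[a, b, (1 - a) * cx - b * cy],
              [-b, a, b * cx + (1 - a) * cy]])

# Map both ends of the 10x1 row of pixels (y = 0, x = 0..9).
p0 = M @ np.array([0.0, 0.0, 1.0])   # stays inside the 10x10 destination
p9 = M @ np.array([9.0, 0.0, 1.0])   # lands at negative y -> cropped at top
```

To center the rotated line in the destination, the translation column M[:, 2] must be shifted by the offset between the rotation center and the destination center, which is the "explicitly specified offset" the accepted answer points to.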
2016-07-17 06:44:52 -0600 | asked a question | Understanding of planes in NAryMatIterator I have a 3-dimensional matrix: Now I'd like to iterate over its planes, expecting to get three two-dimensional matrices: I expect the output "3, 5, 5", but instead I get "1, 1, 125". Where is the slice of the matrix? |
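A numpy analogy of the likely cause (a sketch, not the OpenCV API; the matrix is taken as 5x5x5 since 5·5·5 = 125 matches the reported output): when the matrix data is continuous in memory, NAryMatIterator merges everything into a single 1 x total plane, which is exactly the "1, 1, 125" result:

```python
import numpy as np

# A contiguous 3-D array, analogous to a continuous cv::Mat.
m = np.arange(125, dtype=np.uint8).reshape(5, 5, 5)

# What the iterator hands you for a continuous matrix: one flat "plane".
plane = m.reshape(1, -1)                      # shape (1, 125)

# To actually get 2-D slices, index the outermost dimension yourself.
slices = [m[i] for i in range(m.shape[0])]    # five 5x5 matrices
```

In OpenCV terms, the per-plane iteration only splits the data when the matrix is not continuous; for a continuous Mat you would index or range the outermost dimension yourself to obtain the 2-D slices.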