2020-11-05 14:51:44 -0600 | received badge | ● Nice Question (source) |
2020-10-22 03:11:49 -0600 | received badge | ● Popular Question (source) |
2020-10-21 22:00:18 -0600 | received badge | ● Popular Question (source) |
2020-02-18 14:28:09 -0600 | received badge | ● Notable Question (source) |
2019-09-24 08:59:50 -0600 | received badge | ● Famous Question (source) |
2019-07-12 07:05:50 -0600 | received badge | ● Notable Question (source) |
2019-06-06 07:06:23 -0600 | received badge | ● Popular Question (source) |
2018-02-11 18:57:06 -0600 | commented question | OpenCVjs with Features2D A little further forward here. 1. Enable the build flag: https://github.com/opencv/opencv/blob/master/platforms/js |
2018-02-07 21:01:30 -0600 | edited question | OpenCVjs with Features2D OpenCVjs with Features2D Hello, I am trying to build the OpenCV js bindings however, i think this may in fact be a cmak |
2018-02-07 21:00:38 -0600 | asked a question | OpenCVjs with Features2D OpenCVjs with Features2D Hello, I am trying to build the OpenCV js bindings however, i think this may in fact be a cmak |
2018-01-07 18:11:57 -0600 | received badge | ● Critic (source) |
2018-01-07 17:41:52 -0600 | commented answer | WarpPerspective Advice with correct BBox Pixels Thank you for your input, but this doesn't quite address the issue i am facing. Also src_vertices is the wrong way aroun |
2018-01-07 16:22:38 -0600 | edited question | WarpPerspective Advice with correct BBox Pixels WarpPerspective Advice with correct BBox Pixels Hello, I am trying to do some template matching with OpenCV. The templa |
2018-01-07 16:02:54 -0600 | edited question | WarpPerspective Advice with correct BBox Pixels WarpPerspective Hello, I am trying to do some template matching with OpenCV. The templates in the new image could be wa |
2018-01-07 16:01:18 -0600 | edited question | WarpPerspective Advice with correct BBox Pixels WarpPerspective Hello, I am trying to do some template matching with OpenCV. The templates in the new image could be wa |
2018-01-07 15:59:32 -0600 | edited question | WarpPerspective Advice with correct BBox Pixels WarpPerspective Hello, I am trying to do some template matching with OpenCV. The templates in the new image could be wa |
2018-01-07 15:58:43 -0600 | asked a question | WarpPerspective Advice with correct BBox Pixels WarpPerspective Hello, I am trying to do some template matching with OpenCV. The templates in the new image could be wa |
2017-11-30 15:19:39 -0600 | asked a question | Create Templates from Feature Homography Create Templates from Feature Homography Hello, Currently i am working on texture tracking. Presently, i have extracte |
2017-11-02 01:14:18 -0600 | received badge | ● Popular Question (source) |
2017-06-15 01:43:24 -0600 | received badge | ● Notable Question (source) |
2017-05-17 18:46:15 -0600 | received badge | ● Notable Question (source) |
2017-04-05 07:51:45 -0600 | received badge | ● Popular Question (source) |
2017-04-05 00:53:48 -0600 | commented question | Why does Haar cascade classifier performance change when I crop an image? The cropped image size should ideally be an octave (power-of-two fraction) of the original size. The scaling affects which window sizes the detection occurs at: perhaps after you cropped the image, the particular scale at which your object is detected is no longer visited and gets skipped. Make the scale increments smaller (a scaleFactor closer to 1); this improves the chance of a detection at the cost of speed. |
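The scale-stepping idea in the comment above can be sketched numerically. This is a hypothetical illustration (a 24x24 base window, typical of the stock Haar face cascades; the exact sizes depend on the cascade), not OpenCV's internal code:

```python
def visited_window_sizes(base=24, scale_factor=1.1, max_size=400):
    """Window sizes a multi-scale detector would test, spaced by scale_factor."""
    sizes = []
    s = float(base)
    while s <= max_size:
        sizes.append(round(s))
        s *= scale_factor
    return sizes

# A coarser scaleFactor visits far fewer sizes, so an object whose apparent
# size falls between two visited sizes (e.g. after a crop/resize) can be missed.
coarse = visited_window_sizes(scale_factor=1.3)
fine = visited_window_sizes(scale_factor=1.05)
```

With a smaller scale factor, more window sizes are tested, which is why detection becomes more robust but slower.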
2017-04-05 00:52:05 -0600 | asked a question | Reverse Engineer Features Hello, Is it possible to reverse engineer an image feature? Given a feature2D detection/description method (SIFT, SURF, FREAK, AKAZE etc.), is it possible to create image features that are likely to be detected in an image? I want to create an alphabet of features. I don't think BOW is quite right here, but the usage of a vocabulary may be necessary. Let's say we have 10 images, and we want to add a sticker with one of our features on it. We can print these images onto giant pieces of cardboard and move them in front of the camera. When shown to a camera, the feature detector/descriptor/matcher should be able to tell very quickly which image is currently in view, despite its scale/translation/rotation. I know QR codes are probably better for the scenario I am describing; however, QR codes are not viable. I just want one giant image feature that can be easily matched. Is there a method to know all possible features for a detector/descriptor/matcher ahead of time, and in particular, the ones that will match well? For example, SURF uses a 9x9 patch, so we could create a large image, say 900 x 900, and colour squares according to a 10x10 grid on this surface, making a detectable feature. Please ask for clarification on any points here. UPDATE: Found a paper on Maximum Detector Response Markers for SIFT and SURF |
2016-10-11 02:37:25 -0600 | marked best answer | World Co-ordinates and Object Co-ordinates Hello, I am working with cv::solvePnP() and cv::projectPoints(). We have a fully calibrated camera with a known camera matrix and distortion coefficients. Given a detected marker, it is possible to get the rvec and tvec for a given 3D model. This has been done for two types of model: a board model and a ball model. We then get three sets of tvecs/rvecs, one for the board and two more for the balls, as shown below..... How do we relate these? We can project the model points into the image using the result of solvePnP. How do the rvecs and tvecs relate in this case? Is it possible to get the location of each ball on the board in terms of its x,y,z location relative to the board model? The board is shaped as (0,0), (1,0), (1,1), (0,1). The balls are circles centred at 0 with radius 0.1, which is in scale with the real-world objects. Process so far....
Can we get more information on where the balls are on the board? The ultimate aim is collision detection/prediction once locations and velocities are determined. Kind regards, Daniel |
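One way to relate the poses in the question above: solvePnP returns, for each model, a transform from that model's coordinates into camera coordinates, so a ball's position in the board's frame is the camera-space difference of the two translations rotated back by the board's rotation. A minimal sketch, assuming the rvecs have already been converted to 3x3 rotation matrices (with cv2.Rodrigues in OpenCV) and using plain Python lists instead of cv::Mat:

```python
def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def matvec(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def ball_in_board_coords(R_board, t_board, t_ball):
    # The ball model's origin maps to t_ball in camera coordinates
    # (R_ball * 0 + t_ball = t_ball).  Pull that camera-space point
    # back into the board's frame: p = R_board^T * (t_ball - t_board).
    d = [t_ball[i] - t_board[i] for i in range(3)]
    return matvec(transpose(R_board), d)
```

If the ball is sitting on the board, the returned x and y should fall inside the unit square (0,0)-(1,1), and z should be roughly the ball radius (the sign depends on which way the board's z axis points).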
2016-09-06 13:19:28 -0600 | received badge | ● Popular Question (source) |
2016-08-18 10:21:31 -0600 | commented question | BOWKMeansTrainer Max Images? @berak. I will investigate in the morning. :) |
2016-08-18 07:30:05 -0600 | asked a question | BOWKMeansTrainer Max Images? Hello. I am training BOWKMeansTrainer with a dataset of 30,000 images using AKAZE features and descriptors. When I hit the 3500th sample on a call to BowTrainer.add(descriptor), I receive an error. My machine has 16 GB of RAM and over 9 GB is available at the time of the error. The project is VS2015 with the x64 configuration. The number of clusters was set to 1000 initially, then 400, and finally 16, all yielding the same result. Do I need more? All cv::Mat objects created in the loop are released correctly. What is the limit to the number of images that can be trained? Is there a way to use multiple BOW trainers if I had sub-classes of images? Is this a limitation of debug versus release DLLs for memory allocation?
883200 bytes = 0.0008832 GB?? 9 GB available..... Am I taking crazy pills here? Regards, |
2016-08-17 07:12:32 -0600 | asked a question | Using ifstream in VS2015 Hello, I am calling a function from OpenCV that uses ifstream in C++. ifstream doesn't seem to work, even when tried in isolation. I am getting..... OpenCV Error: Bad argument (Default classifier file not found!) in cv::text::ERClassifierNM1::ERClassifierNM1, file ~\opencv_contrib-3.1.0\modules\text\src\erfilter.cpp, line 1039 The offending line is..... Everything worked in VS2013, but I have to switch to VS2015. I cannot get ifstream to find the file at all, even though it is located in the .exe folder itself, in the project directory — everywhere, really. It's as if ifstream stopped working during the switch to VS2015. https://github.com/opencv/opencv_cont... I have also tried the full path and adding the files to the VS2015 project resources folder. |
2016-08-08 16:30:55 -0600 | commented question | Matrix Multiplication Values Not Correct @LorenaGdL I've updated the numbers and I am still confused. Any clues? |
2016-08-08 16:30:15 -0600 | received badge | ● Associate Editor (source) |
2016-08-08 10:41:19 -0600 | commented question | Matrix Multiplication Values Not Correct Nah, this one..... https://github.com/ucisysarch/opencvjs @berak |
2016-08-08 10:14:08 -0600 | asked a question | Matrix Multiplication Values Not Correct Matrix multiplication..... So I am converting some OpenCV code to OpenCVjs and I cannot get the correct matrix multiplication result. In C++, this appears to work: cv::Mat translation = -rotation * S; Print-out of each matrix (std::cout << cv::Mat() << std::endl):
Print out of the result.
Then verification via an online matrix multiplication tool gives a different answer, as does OpenCVjs using GEMM. Is the cv::Mat * (multiplication) operator the same as GEMM? This is the OpenCVjs code I expected to give the same result, but the online matrix calculator gives yet another result: cv.gemm(rotation, S, -1, emptyMat, 0, translation, 0); I am missing something obvious, I think. UPDATE: EDIT ADDED CORRECT VALUES. The answer is still incorrect....... can somebody please tell me what is METHODOLOGICALLY WRONG WITH WHAT I AM DOING HERE. Am I taking crazy pills here? |
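For what it's worth, the two calls in the question above compute the same thing on paper: cv::Mat's * operator is a plain matrix product (on float/double matrices), and gemm computes alpha*A*B + beta*C, so alpha = -1 and beta = 0 should reproduce -rotation * S. A pure-Python sketch of that identity (illustrative, not the OpenCV implementation):

```python
def matmul(A, B):
    """Plain matrix product of nested-list matrices."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def gemm(A, B, alpha, C, beta):
    # Generalized matrix multiply: alpha * A @ B + beta * C
    # (C may be None when beta == 0, mirroring an empty Mat).
    M = matmul(A, B)
    return [[alpha * M[i][j] + (beta * C[i][j] if C is not None else 0.0)
             for j in range(len(M[0]))] for i in range(len(M))]
```

If the results still differ in practice, the usual culprits are the matrix element type (both operator* and cv::gemm require CV_32F or CV_64F data) or row-major versus column-major entry of the values into the online calculator.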
2016-07-14 06:11:59 -0600 | answered a question | Find Peaks in Histogram You want local maxima. The histogram is a Mat, so you can get the value at each bin index. How you do this is up to you, but a simple approach is a sliding window over the previous, current and next values: if prev < current > next, then you have a peak. That's a pretty crude approach, so you may want to smooth or normalize your values first. |
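The sliding-window idea from the answer above, as a minimal sketch over a plain list of bin counts (with an OpenCV histogram you would read the same values out of the Mat first):

```python
def smooth(hist, k=3):
    """Simple moving average to suppress noisy single-bin spikes."""
    half = k // 2
    out = []
    for i in range(len(hist)):
        window = hist[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def find_peaks(hist):
    # A bin is a peak when it is strictly greater than both neighbours.
    return [i for i in range(1, len(hist) - 1)
            if hist[i - 1] < hist[i] > hist[i + 1]]
```

Smoothing first (or normalizing with something like cv::normalize) keeps tiny jitters between adjacent bins from being reported as peaks.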
2016-07-13 17:38:48 -0600 | commented question | Mock Camera Intrinsics @Tetragramm Nope. It's an arbitrary image from an unknown camera. It should work for all images, even ones downloaded from the internet. What if it was a drawing from Photoshop? The camera intrinsics may not be available. Interesting problem, right? :) |
2016-07-13 17:04:12 -0600 | edited question | Mock Camera Intrinsics Hello, I am following an amazing blog post to create a perspective transform for an arbitrary image. https://jepsonsblog.blogspot.co.nz/20... This works well for square images, but has a problem with the aspect ratio when it comes to rectangular images. I suspect the problem is the camera intrinsics, which depend on the field of view. Comments on the blog suggest values of f of 200-250 when using this method. The OpenCV documentation is a little more precise, stating that there are in fact separate fx and fy. Focal length can be found if the field of view is known, and vice versa.
What is the solution here? See the example image of the problem: the result is oddly stretched in the x axis, and the stretching becomes more severe as the rotation increases. It works fine for square images. |
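A common way to mock the intrinsics for an arbitrary image is to derive the focal length from an assumed field of view, f = size / (2 * tan(fov / 2)), and put the principal point at the image centre. A sketch under that assumption (the 60-degree FOV is a guess, not a known camera value); tying f to the actual image dimensions, rather than using a fixed 200-250 regardless of size, is what keeps the result consistent across aspect ratios:

```python
import math

def focal_from_fov(size_px, fov_deg):
    # Pinhole model: size = 2 * f * tan(fov / 2), solved for f.
    return size_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

def mock_intrinsics(w, h, fov_deg=60.0):
    # Square-pixel assumption: one focal length for both axes, derived
    # from the image width; principal point at the image centre.
    f = focal_from_fov(w, fov_deg)
    return [[f,   0.0, w / 2.0],
            [0.0, f,   h / 2.0],
            [0.0, 0.0, 1.0]]
```

With fx = fy and the principal point centred, a rectangular image rotates without the x-axis stretch, because the projection no longer implicitly assumes a square image plane.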
2016-07-13 08:42:07 -0600 | commented answer | How to detect weather the circle is green or black... Yup. Black should be (0,0,0) and green should be ~(0,255,0). There are other tricks, but that's the essential formula. To get good contours, convert to grayscale and run equalizeHist before FindContours. |
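The colour test in the comment above could be sketched like this, assuming the mean BGR colour inside each contour has already been measured (e.g. with cv2.mean over a contour mask); the dark threshold of 60 is an arbitrary illustration value, not something from the original answer:

```python
def classify_circle(mean_bgr, dark_thresh=60):
    """Label a circle by its mean BGR colour: 'black', 'green', or 'other'."""
    b, g, r = mean_bgr
    if max(b, g, r) < dark_thresh:
        return "black"   # all channels near (0, 0, 0)
    if g > r and g > b:
        return "green"   # green channel dominates, near (0, 255, 0)
    return "other"
```

In practice a threshold in HSV space is often more robust to lighting than raw BGR comparisons, but the channel-dominance test matches the formula described in the comment.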
2016-07-13 08:39:31 -0600 | commented answer | Where is opencv_core310.dll ? Have you tried building from source? |
2016-07-13 08:39:21 -0600 | commented answer | Where is opencv_core310.dll ? Hmmm. It seems like it should be there...... http://docs.opencv.org/3.1.0/dc/d88/t... |