2017-01-25 02:19:18 -0600 | received badge | ● Popular Question (source) |
2016-01-12 08:00:13 -0600 | received badge | ● Nice Answer (source) |
2013-09-02 11:13:13 -0600 | commented answer | ELSE runs even after IF runs! Don't debug in release mode and expect it to be 100% accurate! Optimization can confuse the debugger, leading to the behavior you see in your video. Disable optimization in release mode and see if it still happens.
2013-08-08 05:27:18 -0600 | commented question | ORB detector octaves Which size do the images have? |
2013-08-06 01:29:55 -0600 | answered a question | Is SURF algorithm used in OPENCV patented? Yes, it is patented; that's why it's in the nonfree module. To use it commercially, you have to contact the patent holders. To be honest, I don't understand why everyone still uses SIFT/SURF when there are better alternatives in OpenCV (BRISK or FREAK, for example).
2013-08-05 09:38:27 -0600 | answered a question | opencv v2.4.6 video capture Seems like you didn't really try. |
2013-08-05 06:29:31 -0600 | answered a question | Overlapping rectangles I doubt there's something in OpenCV for that, but why don't you do it yourself? It's fairly trivial: Find overlapping rectangles and for each overlap, select the one with the higher weight.
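A minimal sketch of that idea, using a hypothetical plain Box struct rather than any OpenCV type:

```cpp
#include <vector>
#include <algorithm>

// Hypothetical detection box: position, size, and detector weight/score.
struct Box {
    int x, y, w, h;
    double weight;
};

// Two axis-aligned rectangles overlap if they intersect on both axes.
bool overlaps(const Box& a, const Box& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// Keep only the highest-weight box out of each overlapping group:
// sort by weight, strongest first, then greedily accept boxes that
// do not overlap an already accepted one.
std::vector<Box> suppressOverlaps(std::vector<Box> boxes) {
    std::sort(boxes.begin(), boxes.end(),
              [](const Box& a, const Box& b) { return a.weight > b.weight; });
    std::vector<Box> kept;
    for (const Box& b : boxes) {
        bool clash = false;
        for (const Box& k : kept)
            if (overlaps(b, k)) { clash = true; break; }
        if (!clash) kept.push_back(b);
    }
    return kept;
}
```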
2013-08-02 08:49:44 -0600 | commented question | About Installation Are you sure that you have knowledge in programming? If you can't work with your IDE or add some libraries to your projects, I doubt that you will even be able to use OpenCV at all. |
2013-07-31 04:34:32 -0600 | commented question | OpenCV Fuzzy Based Split and Merge (FBSM) You look at what the authors did and then implement that as an algorithm. What do you want from us? It seems you need to represent fuzzy logic in your program, so that's one thing. |
2013-07-29 13:14:53 -0600 | answered a question | Best way to detect that eyes are closed. Matchtemplate/Crosscorrelation isn't that great when you have black/white images, grayscale is better for it. In your case, I see a big difference in the images of open/closed eyes: A closed eye has the form of a line, while an open eye is more or less square. What you can do is count the number of rows and columns that have black pixels in them. For the open eye, you get for example 12 rows and 15 columns. Divide the smaller by the bigger number and you get a quotient that will tell you how square the eye is. For the open eye, that value should be near 1. On the other hand, the closed eye has for example 5 rows and 17 columns, which gives you the quotient 0.29. So you just have to find a threshold that you use to tell if an eye is closed or not.
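The row/column counting idea can be sketched like this, assuming you already have a binarized eye image (1 = dark/eye pixel, 0 = background); the representation is illustrative, not an OpenCV API:

```cpp
#include <vector>
#include <algorithm>

// Binarized eye image: 1 = dark pixel, 0 = background.
using Image = std::vector<std::vector<int>>;

// "Squareness" quotient: count the rows and columns that contain at
// least one dark pixel, then divide the smaller count by the bigger.
// Near 1 -> roughly square (open eye); small -> line-like (closed eye).
double squareness(const Image& img) {
    int rows = 0;
    std::vector<bool> colHasDark(img.empty() ? 0 : img[0].size(), false);
    for (const auto& row : img) {
        bool dark = false;
        for (size_t c = 0; c < row.size(); ++c) {
            if (row[c]) { dark = true; colHasDark[c] = true; }
        }
        if (dark) ++rows;
    }
    int cols = (int)std::count(colHasDark.begin(), colHasDark.end(), true);
    if (rows == 0 || cols == 0) return 0.0;
    return (double)std::min(rows, cols) / (double)std::max(rows, cols);
}
```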
2013-07-25 10:29:59 -0600 | commented question | Object recognition on opencv4android Sorry, never used BOW/SVM. |
2013-07-25 09:24:25 -0600 | commented question | Object recognition on opencv4android The multi probe level defines how many neighbouring buckets (finding the nearest neighbours is done using hash tables) are searched to find the nearest neighbours (the descriptors with the lowest distance to the query descriptor). Of course the matching is poor. Simply computing the nearest neighbours isn't enough to get reliable results. For that, you have to use the tests and epipolar geometry I wrote about in my other comment. But there's not much to be done, the code is all in the RobustMatcher class in the OpenCV book.
2013-07-25 05:42:18 -0600 | answered a question | how works bruteforcematcher? See this. It's used to find matches between keypoint descriptors by brute-force comparing each descriptor in the first set to every descriptor in the second set.
2013-07-25 05:20:27 -0600 | commented answer | Object recognition on opencv4android Aww, too bad it's only in Russian.
2013-07-25 04:19:40 -0600 | commented answer | Object recognition on opencv4android Which article? |
2013-07-25 04:19:40 -0600 | received badge | ● Commentator |
2013-07-25 03:08:22 -0600 | commented answer | Object recognition on opencv4android You don't seem to know how scale invariance works. Look at David Lowe's SIFT; it is achieved with image pyramids. With your image sizes, you can only reliably cover 2 octaves before the images get too small to compute more than 5 or 10 keypoints. Plus, the algorithms that compute the keypoint descriptors already use blurring to reduce noise, so extra blurring is not necessary at all!
2013-07-25 03:02:06 -0600 | commented question | Object recognition on opencv4android For LSHIndexparams, I found that for my matching project the params 20, 15, 0 (hashtable size, key size, multi probe level) work best. For your case, do you use the epipolar geometry to verify your matching results? Because simply finding the nearest neighbours only yields good results if your images are nearly identical. You need some other tests (quotient of 2 nearest neighbours, symmetry test) and RANSAC with epipolar geometry to filter out bad matches. Code examples for these tests are in the OpenCV cookbook (can be downloaded for free as pdf), search for the RobustMatcher class in the book. |
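The ratio test (the "quotient of 2 nearest neighbours") on its own can be sketched without OpenCV types; the TwoNN struct here is just an illustration, the real code in the RobustMatcher class works on the cv::DMatch vectors that knnMatch returns:

```cpp
#include <vector>

// The two nearest-neighbour distances for one query descriptor.
struct TwoNN { double best, secondBest; };

// Lowe-style ratio test: accept a match only if the best neighbour is
// clearly better than the second best. A typical threshold is 0.7-0.8.
std::vector<bool> ratioTest(const std::vector<TwoNN>& matches,
                            double ratio = 0.8) {
    std::vector<bool> accepted;
    accepted.reserve(matches.size());
    for (const TwoNN& m : matches)
        accepted.push_back(m.secondBest > 0.0 && m.best / m.secondBest < ratio);
    return accepted;
}
```

The symmetry test and the RANSAC/epipolar-geometry step filter the survivors further; see the RobustMatcher class in the book for those.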
2013-07-25 02:54:06 -0600 | commented question | Angle and Scale Invariant Template matching Tell us about the specific errors that you get. Also, it would help if you used correct indentation! The lines in the for loops are all over the place, which makes reading the code harder than necessary.
2013-07-24 09:12:41 -0600 | commented question | Object recognition on opencv4android About the time for FLANN: Change the third parameter, the multi probe level. OpenCV recommends 2, but in my tests I found that 0, while finding slightly fewer correct matches (not much, maybe 5%), provides a significant speedup to the matching process.
2013-07-24 09:09:48 -0600 | commented answer | Object recognition on opencv4android What is your basis for saying that they work better? If your images differ by a scale factor, then by using low-res images you lose scale invariance! Which means that matching will be worse!
2013-07-24 04:16:21 -0600 | commented answer | faster X/Y matrix creation Then how do you expect to make matrix initialization faster? It's a process that has to touch every element. But judging from the other comments, you don't have to create the matrix every time you call perspective_to_maps. Just create it once before you (repeatedly) call the method and pass it as an argument, since that matrix never changes.
2013-07-24 01:46:50 -0600 | answered a question | I have two images, one body image and other is shirt image. how can i scale shirt image according to body image (size) and place it on body? (images are attached) help me in opencv c++! Compute the pixel dimensions of the body and of the shirt. Once you have those, it's trivial to scale the shirt image to the pixel dimensions of the body. Then just put the image on the body. If you want it to look like the shirt is really on the body, you need to research that yourself.
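The scaling arithmetic might look like this; bodyWidthPx is a hypothetical measurement (e.g. shoulder span in pixels) you'd extract from the body image beforehand:

```cpp
#include <algorithm>

// Minimal size type for illustration (cv::Size would do the same job).
struct Size2i { int width, height; };

// Scale the shirt so its width matches the body measurement while
// keeping the shirt's aspect ratio.
Size2i fitShirtToBody(Size2i shirt, int bodyWidthPx) {
    double scale = (double)bodyWidthPx / (double)shirt.width;
    return { bodyWidthPx, std::max(1, (int)(shirt.height * scale + 0.5)) };
}
```

With the target size in hand, cv::resize does the actual scaling, and copying the result into a ROI of the body image places the shirt.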
2013-07-23 16:03:07 -0600 | answered a question | A way to replace cvGet2D( distribution, y, x ) Scalar is a 4-element vector, so to make your own cvGet2D you can fill a Scalar with the channel values that Mat::at gives you. (Btw. Mat::at is row first, column last. Just fyi, because that order is easy to mix up with (x, y).)
2013-07-23 08:57:44 -0600 | answered a question | faster X/Y matrix creation If you use Visual Studio 2010 or higher, you can use the PPL library. You then have access to parallel loops; in this case you should take a look at parallel_for. That way, you can parallelize the initialization of the matrix. If you don't use VS, there are other libraries around that provide parallelization (Intel TBB etc.). If you use parallelization, you have to change the way you set the matrix values, since the order in which the elements get accessed is non-deterministic.
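If PPL is not available, the same row-splitting idea can be sketched portably with std::thread (this illustrates the pattern, it is not the PPL API):

```cpp
#include <vector>
#include <thread>
#include <algorithm>

// Split the rows of an X-coordinate map across hardware threads.
// Each thread writes a disjoint set of rows, so no locking is needed.
void fillMapX(std::vector<std::vector<float>>& mapX) {
    unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    size_t rows = mapX.size();
    for (unsigned t = 0; t < nThreads; ++t) {
        workers.emplace_back([&mapX, rows, t, nThreads] {
            for (size_t r = t; r < rows; r += nThreads)       // strided rows
                for (size_t c = 0; c < mapX[r].size(); ++c)
                    mapX[r][c] = (float)c;  // example: x-coordinate map
        });
    }
    for (auto& w : workers) w.join();
}
```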
2013-07-18 23:44:27 -0600 | answered a question | opencv_core245d.dll cannot be read by vs12 Of course it gives a PDB file missing error, because they aren't provided with the pre-compiled binaries. But that's not an error, only a warning. You simply don't have debug information for those dlls. So don't worry. You can copy those specific dlls to the project folder. |
2013-07-14 16:35:17 -0600 | answered a question | Problem accessing Mat Don't mix C++ and C! I'm getting nightmares seeing all that mixup... First search if OpenCV provides functionality that you want to use. In this case, look at FileStorage. |
2013-07-13 16:02:43 -0600 | commented question | Database(DBMS) for Computer Vision Yeah, sure. Normally you pay consultants for that kind of opinion when you haven't put in any thought of your own. Ask us specific questions (and not just "what is the best RDBMS for finding matches") and we will answer.
2013-07-11 12:29:11 -0600 | commented answer | Frequency of a sine wave Could you give an example of your image? |
2013-07-11 11:52:23 -0600 | commented question | How to check opencv libs in image? Again. The libraries are not included in the image! An image can be created using OpenCV, but unless the user put some kind of info into the metadata of the file, there's no way to know if an image was created with OpenCV. |
2013-07-11 11:50:13 -0600 | answered a question | Frequency of a sine wave If your image is more or less black and white, getting the peaks is fairly easy. Write an algorithm that follows the sine wave and you get its pixel positions. Computing the peaks is then simply done by checking whether the x or y value (column/row) starts decreasing instead of increasing and vice versa. I don't know how thick your wave is, but with skeletonization you can get a one pixel wide line, which would be easier to check for deformities and stuff. I don't have much time atm, so I can't offer you any code, but I hope this helps.
2013-07-11 07:29:10 -0600 | commented question | How to check opencv libs in image? OpenCV does not get included in images, it is a library for (e.g.) image processing. |
2013-07-11 07:27:31 -0600 | commented question | Template matching using image processing Well, there are tutorials for template matching with OpenCV. We're here to answer specific questions. About the language: Use C++. |
2013-07-07 04:01:00 -0600 | commented question | Cannot open include file: 'cxtypes.h': No such file or directory Great tag you got there. Anyway, you solve errors of the sort "no such file or directory" by adding the directory that contains said file to your include directories.
2013-07-06 09:10:50 -0600 | commented answer | entry to professional programming At http://docs.opencv.org/doc/tutorials/introduction/windows_install/windows_install.html#windowssetpathandenviromentvariable it says that you have to use "\OpenCV\Build\x86\vc10" so do that ;) you downloaded the right thing. |
2013-07-06 04:21:59 -0600 | received badge | ● Citizen Patrol (source) |
2013-07-06 03:23:49 -0600 | answered a question | entry to professional programming |
2013-07-06 03:17:51 -0600 | answered a question | Color correction That's what histogram equalization is for. Here is a link for global histogram equalization, and here is an example with a local histogram equalization method. If you want to do it on separate image channels (e.g. RGB), use split() to get the channels and apply the equalization to each channel. Then use merge() to make an image out of the changed channels.
2013-07-05 10:24:19 -0600 | commented question | entry to professional programming What knowledge do you have about pattern recognition / image processing? Face recognition is very complex. You can get results using OpenCV, but simply by following a tutorial, you won't really learn much. And without knowledge of C++, this will be even harder. It seems that doing face recognition might be over the top for you at this point. On top of that, you didn't even mention the errors you got, just that you have them, which won't tell us anything at all.
2013-07-05 05:03:44 -0600 | commented answer | About CV_SWAP macro Look at the authors name on SO and the time he created the topic ;) it's the same person. |
2013-07-03 08:23:34 -0600 | commented answer | Calculating the area of Bounding Box Let's continue this in your topic. |
2013-07-01 10:41:19 -0600 | commented answer | Calculating the area of Bounding Box If you have the position of your bbox in the image, the width is just the difference between the highest and lowest x-coordinate. And the height the y-coordinate difference. |
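In code, with two opposite corner points of the bbox (the corner-point input is just one possible representation):

```cpp
#include <cstdlib>

// Width, height and area of an axis-aligned bounding box given two
// opposite corners (x1, y1) and (x2, y2).
struct BBoxSize { int width, height, area; };

BBoxSize bboxSize(int x1, int y1, int x2, int y2) {
    int w = std::abs(x2 - x1);  // difference of the x-coordinates
    int h = std::abs(y2 - y1);  // difference of the y-coordinates
    return { w, h, w * h };
}
```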
2013-07-01 09:28:13 -0600 | commented answer | Calculating the area of Bounding Box ... And what is your problem? You can't compute the center or size of a bounding box? |
2013-07-01 03:14:53 -0600 | commented question | Object Identification in 2D image How are you going to get depth information from only one image? And what has intensity to do with depth? If I hold an object 5cm and 25cm from a camera, it doesn't get darker or brighter (unless there's a bright light source shining onto it..) |
2013-06-27 12:42:24 -0600 | commented answer | [VideoCapture::open][Qt Creator]Debug Assertion Failed (unsigned)(c+1) <= 256 There is no isctype.c in OpenCV (at least not in 2.4.5), so I have no idea what exactly the problem is, just that c is the cause. I told you to use the release lib ONLY in the release build and the debug lib ONLY in the debug build. Don't mix them! Please answer my question about the VideoCapture class and the provided image file. And lastly, show us the relevant code that produces this behaviour! Without seeing your code, how could we even think about finding the error?
2013-06-27 09:18:41 -0600 | answered a question | [VideoCapture::open][Qt Creator]Debug Assertion Failed (unsigned)(c+1) <= 256 LIBS += C:/C/opencv/build/x86/vc10/lib/opencv_highgui240d.lib LIBS += C:/C/opencv/build/x86/vc10/lib/opencv_highgui240.lib ^ Don't link the debug AND release versions in the same build. Use the *d.lib for the debug build and the *.lib for the release build. I guess that this is the problem. And why do you open an image file with the VideoCapture class?
2013-06-27 09:13:00 -0600 | answered a question | How to create an OpenCV application that runs without OpenCV Manager As you can see in the OpenCV documentation, it seems to be a necessity. Otherwise, every application that uses OpenCV on a mobile platform would have to bundle its own OpenCV libraries, which is just bad.