2019-01-17 10:16:50 -0600 | received badge | ● Popular Question (source) |
2018-09-28 11:01:54 -0600 | received badge | ● Popular Question (source) |
2016-03-13 22:00:06 -0600 | received badge | ● Good Answer (source) |
2016-03-13 22:00:06 -0600 | received badge | ● Enlightened (source) |
2016-01-26 13:28:07 -0600 | received badge | ● Necromancer (source) |
2015-10-23 06:16:31 -0600 | received badge | ● Nice Question (source) |
2015-10-05 22:53:45 -0600 | received badge | ● Nice Question (source) |
2015-09-30 07:49:53 -0600 | received badge | ● Nice Answer (source) |
2015-05-03 09:38:30 -0600 | received badge | ● Nice Question (source) |
2015-02-17 05:29:19 -0600 | commented question | How to get better results with OpenCV face recognition Module Try here: |
2015-01-04 07:04:22 -0600 | commented question | Adding rotation invariance to the BRIEF descriptor (contribution to OpenCV) Thanks @Guanta, I'll try ICIP!
2015-01-03 14:53:21 -0600 | commented question | Adding rotation invariance to the BRIEF descriptor (contribution to OpenCV) @Guanta, thanks for your comment! I'm in the process of making a pull request! Can you recommend a small conference where I can try to publish it? I thought it's way too minor to be published. Thanks! |
2015-01-02 16:16:26 -0600 | commented question | Adding rotation invariance to the BRIEF descriptor (contribution to OpenCV) Not exactly. ORB uses its own mechanism for measuring the patch orientation and also uses unsupervised learning to select what the authors claim to be an optimal set of sampling pairs. I suggest following the original BRIEF implementation (random sampling pairs), but adding rotation invariance using the keypoint detector's estimate of the patch orientation, which proves superior to ORB's estimate. In experiments I've conducted, the SIFT detector coupled with the rotation-invariant BRIEF descriptor outperforms the ORB detector coupled with the ORB descriptor.
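The idea described in the comment above can be sketched in a few lines: rotate each BRIEF sampling pair by the angle the keypoint detector estimated, then compare intensities at the rotated locations. This is a minimal illustration of the technique, not the actual PR code; the function names and nearest-pixel sampling are mine.

```python
import math

def rotate_point(x, y, angle_deg):
    """Rotate an (x, y) offset about the patch center by the keypoint angle."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def steered_brief(patch, center, angle_deg, pairs):
    """Binary descriptor bits from intensity comparisons at rotated pair
    locations. patch is a 2-D list of intensities, center is (cx, cy),
    pairs is a list of ((x1, y1), (x2, y2)) offsets (the random BRIEF pairs)."""
    cx, cy = center
    bits = []
    for (x1, y1), (x2, y2) in pairs:
        rx1, ry1 = rotate_point(x1, y1, angle_deg)
        rx2, ry2 = rotate_point(x2, y2, angle_deg)
        # Nearest-pixel lookup; real implementations smooth the patch first.
        i1 = patch[int(round(cy + ry1))][int(round(cx + rx1))]
        i2 = patch[int(round(cy + ry2))][int(round(cx + rx2))]
        bits.append(1 if i1 < i2 else 0)
    return bits
```

Because the pairs are steered by the detector's angle rather than a descriptor-internal estimate, the same physical pair of points is compared regardless of how the patch is rotated, which is the claimed advantage over ORB's orientation mechanism.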
2015-01-02 13:14:24 -0600 | asked a question | Adding rotation invariance to the BRIEF descriptor (contribution to OpenCV) Hi, I've implemented code to add rotation invariance to the BRIEF descriptor: cpp: https://github.com/GilLevi/opencv_con... header: https://github.com/GilLevi/opencv_con... tutorial: https://github.com/GilLevi/opencv/blo... The approach is explained and evaluated in my blog post: https://gilscvblog.wordpress.com/2015... Can someone please review my code and tell me what additional work is required in order to make a pull request? Thanks! Gil. |
2014-12-31 07:13:38 -0600 | received badge | ● Enthusiast |
2014-12-09 14:01:14 -0600 | marked best answer | Opencv_haartraining does not converge I'm running OpenCV 2.4.7 on Windows 8. I'm using opencv_traincascade to train a new cascade for faces. I ran the following command: However, it seems to get stuck: This happens every time I run it; I even tried changing the values to -minhitrate 0.8 -maxfalsealarm 0.7. The first time, it ran for 180 iterations producing the exact same values. I have about 13,000 positives, but I set npos to 9000 so I won't run out of positive examples. I have to use the old function instead of traincascade, as my colleague wrote his code using the old C interface. Can someone please explain the cause of this problem and how to fix it? Thanks, Gil
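The relationship between -minhitrate, the number of stages, and positive-sample consumption mentioned above can be made concrete with some back-of-the-envelope arithmetic. This is a rough sketch of the usual reasoning for setting npos below the total positive count, not traincascade's exact bookkeeping:

```python
def overall_hit_rate(min_hit_rate, num_stages):
    """Best-case fraction of positives still accepted after all stages:
    each stage is allowed to reject up to (1 - minHitRate) of them."""
    return min_hit_rate ** num_stages

def positives_consumed(npos, min_hit_rate, num_stages):
    """Rough estimate of positive samples drawn from the .vec file:
    the trainer replaces positives rejected by earlier stages, so it
    needs more than npos samples in total."""
    consumed = float(npos)
    for _ in range(num_stages):
        consumed += npos * (1 - min_hit_rate)  # replacements per stage
    return consumed
```

With the default minHitRate of 0.995 and 20 stages, the cascade can end up accepting only about 90% of positives, and a 9000-sample npos draws a few hundred extra samples from the pool, which is why keeping npos comfortably below the 13,000 available is sensible.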
2014-12-09 13:56:34 -0600 | marked best answer | Why are the values returned from BRISK's smoothedIntensity so large, much larger than intensity values? Hi, I have a question regarding BRISK's "smoothedIntensity" function. Why are the values it returns so large, much larger than intensity values? Shouldn't they be on the scale of intensity values (since they are smoothed intensities)? And why does BRISK use an integral image? I replaced the implementation with the following simple implementation that gives the sum of the 3x3 box around the pixel; could you please tell me if it's correct? The current smoothedIntensity implementation confused me, so I'm really not sure anymore. Thanks, Gil.
2014-12-09 13:55:23 -0600 | marked best answer | Flower Detection Hi, I'm developing a flower detector and would be glad if anyone has some ideas I could try. Current directions I was thinking of:
Any other directions you can suggest? Thanks in advance, Gil. |
2014-12-09 13:53:57 -0600 | marked best answer | latentsvm_multidetect sample gives very bad results Hi, I'm using the sample file latentsvm_multidetect to test LatentSvmDetector. I'm using the models provided in opencv_extra ("opencv_extra/testdata/cv/latentsvmdetector/models_VOC2007") and also the images provided there - one of cars and the other of a cat. The code compiles and runs, but I'm getting very bad results: detections of all kinds of objects in the images (for example, in the cars image I'm getting about 80 detections of various objects when there are only six cars in the image). I'm running the code "as is", so I don't understand why this happens. Is there any flag I need to turn off/on or anything like that? Am I supposed to expect such results? Thank you, Gil.
2014-12-09 13:52:30 -0600 | marked best answer | Using the SIFT and SURF descriptors in detector_descriptor_matcher_evaluation.cpp Hi, I'm conducting a comparison of descriptors using the code in (Example) detector_descriptor_matcher_evaluation.cpp. I managed to get FREAK, ORB, BRISK and BRIEF running, but I can't seem to get SIFT and SURF to work. The problem is that the "create" function doesn't have SIFT and SURF in its list of algorithms. Can someone please explain how I can use SIFT and SURF in that framework? Thanks in advance! Gil.
2014-12-09 13:52:20 -0600 | marked best answer | Problem accessing Mat Hi, I'm writing a simple program that extracts descriptors from images and writes them to files. I'm saving the descriptors in a Mat variable, but I'm getting wrong values when trying to access them. Here is the code: The line where I'm accessing the descriptors matrix is int gil = desc.at&lt;int&gt;(ix, jx); Is there something I'm doing wrong? Any help will be greatly appreciated, as I'm quite stuck :) Thanks, Gil.
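A likely cause of the wrong values above - an assumption on my part, since the descriptor type isn't shown - is reading a CV_32F matrix with at&lt;int&gt;: cv::Mat::at does no conversion, so the float's raw IEEE-754 bits get reinterpreted as an integer. A pure-Python illustration of that reinterpretation:

```python
import struct

def float_bits_as_int(value):
    """Reinterpret the IEEE-754 bytes of a 32-bit float as a signed int,
    mimicking what reading a CV_32F element through at<int> would do."""
    return struct.unpack('<i', struct.pack('<f', value))[0]

# A perfectly ordinary descriptor value like 1.0 comes out as a huge int.
huge = float_bits_as_int(1.0)
```

If this is indeed the issue, the fix is to match the access type to the matrix depth: at&lt;float&gt; for SIFT/SURF descriptors (CV_32F), at&lt;uchar&gt; for binary descriptors such as ORB or BRIEF (CV_8U).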
2014-12-09 13:40:47 -0600 | marked best answer | Haar-cascade training took very little time and no xml was produced I'm trying to train a new Haar cascade for faces. I have a positive dataset of 2000 cropped face images (just the face) and 3321 random negative images. I created the positives list using the following command: Where the file info.txt contains the following lines: Afterwards, I ran haar_training using the following command: Where the file infofile.txt contains the names of the background images: Training took only about two hours and no xml file was generated. The folder haarcascade contains 20 folders, each with a txt file named 'AdaBoostCARTHaarClassifier.txt', but no xml was generated. I have two questions: 1. Why did training take so little time? 2. Why was no xml file generated? What am I missing here? Thanks, Gil
2014-12-09 13:16:25 -0600 | marked best answer | CMake error when building OpenCV I'm trying to build OpenCV with CMake on Windows 7. I chose the Visual Studio 10 compiler. I'm getting the following error: CMake Error at C:/Program Files (x86)/CMake 2.8/share/cmake-2.8/Modules/CMakeCXXInformation.cmake:37 (get_filename_component): get_filename_component called with incorrect number of arguments Call Stack (most recent call first): CMakeLists.txt:2 (PROJECT) I'm sure the path to OpenCV is correct and I haven't made any changes to CMakeLists.txt. Can anyone please guide me on how to fix this error? Thanks in advance!
2014-12-09 13:09:34 -0600 | marked best answer | How to filter a single-column Mat with a Gaussian in OpenCV I have a Mat with only one column and 1600 rows. I want to filter it with a Gaussian. I tried the following: But I get the exact same values in AFilt (the filtered Mat) as in A; it looks like GaussianBlur has done nothing. What's the problem here? How can I smooth a single-column Mat with a Gaussian kernel? I read about BaseColumnFilter, but haven't seen any usage examples, so I'm not sure how to use it. Any help will be greatly appreciated, as I don't have a clue. I'm working with OpenCV 2.4.5 on Windows 8 using Visual Studio 2012. Thanks, Gil.
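One plausible explanation for the no-op result above (the actual call isn't shown, so this is a guess): a kernel Size whose height is 1 blurs only along the row direction, which does nothing on a single-column Mat - cv::Size is (width, height), so the column case needs something like Size(1, ksize). What the column smoothing should compute can be sketched in pure Python:

```python
import math

def gaussian_kernel(ksize, sigma):
    """Normalized 1-D Gaussian weights; ksize must be odd."""
    half = ksize // 2
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-half, half + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth_column(values, ksize=5, sigma=1.0):
    """Convolve a single column with the Gaussian, replicating the borders
    (the BORDER_REPLICATE convention)."""
    k = gaussian_kernel(ksize, sigma)
    half = ksize // 2
    n = len(values)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - half, 0), n - 1)
            acc += w * values[idx]
        out.append(acc)
    return out
```

If my reading of the Size convention is right, the OpenCV equivalent would be GaussianBlur(A, AFilt, Size(1, ksize), 0, sigma) on the 1600x1 Mat.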
2014-12-09 13:08:54 -0600 | marked best answer | Exception when constructing BRISK in debug mode but not in release Hi, I'm running the following simple code: It works fine in release mode, but in debug mode I get an unhandled exception at 0x000007FF9EA9811C in BriskBoosting1.exe: Microsoft C++ exception: std::length_error at memory location 0x000000A211DA9D70. How can the code work in release but not in debug? Can someone please shed some light on this problem? If it makes any difference, I'm using Visual Studio 2012 and running on Windows (x64). Thanks in advance!
2014-12-09 13:08:21 -0600 | marked best answer | Brisk does not calculate orientation when keypoints are provided Hi, I encountered something that looks a bit strange to me regarding Brisk's implementation. A common way to use Brisk is: However, when using it this way, we supply the keypoints to the Brisk descriptor and the flag "useProvidedKeypoints" is true, thus Brisk does not compute orientation: Is that a bug or am I missing something here about Brisk's implementation? Thanks in advance, Gil. |
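One workaround for the behavior described above (my sketch, not BRISK's internal code) is to estimate each keypoint's angle yourself before calling compute, for example with an ORB-style intensity centroid:

```python
import math

def intensity_centroid_angle(patch):
    """ORB-style orientation estimate: the angle, in degrees, of the vector
    from the patch center to its intensity centroid (atan2 of the first
    image moments m01, m10 taken about the center)."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = m01 = 0.0
    for y, row in enumerate(patch):
        for x, v in enumerate(row):
            m10 += (x - cx) * v
            m01 += (y - cy) * v
    return math.degrees(math.atan2(m01, m10))
```

Assigning the result to each keypoint's angle field before describing gives the descriptor an orientation to work with even when useProvidedKeypoints is true and the internal orientation step is skipped.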
2014-12-09 13:08:15 -0600 | marked best answer | Training new LatentSVMDetector Models. Hi, I haven't found any method to train new latent SVM detector models using OpenCV. I'm currently using the existing models given in the xml files, but I would like to train my own. Is there any method for doing so? Thank you, Gil.
2014-12-09 13:05:32 -0600 | marked best answer | InitModule_nonFree() - unresolved external symbol. Hi, I'm writing a small program that extracts descriptors from images and writes them to files, using (example) detector_descriptor_matcher_evaluation as a reference. I guess this is a very simple problem, but I just can't solve it; I'm probably missing something: everything compiled and worked fine (I used FAST and ORB), but I had to use SIFT, so I added a call to cv::initModule_nonfree(); and an include: #include "opencv2/nonfree/nonfree.hpp". But now I'm getting a linker error: I'm quite sure all the definitions in the project properties are ok, since it worked with ORB before I added the call to initModule_nonfree(). Can someone please tell me what I might be missing here and what the problem could be? Thanks. Also, another small question: what's the purpose of in the example detector_descriptor_matcher_evaluation? Here's the code. I commented it out since it throws an exception. Is it ok not to use it? Thanks, Gil
2014-11-29 07:29:14 -0600 | commented question | CVPR15 - OpenCV Vision Challenge. @StevenPuttemans, thanks for the advice! |
2014-11-28 05:20:18 -0600 | received badge | ● Nice Question (source) |
2014-11-27 08:33:24 -0600 | asked a question | CVPR15 - OpenCV Vision Challenge. Hi, OpenCV is sponsoring a vision challenge at the upcoming CVPR conference: http://code.opencv.org/projects/opencv/wiki/VisionChallenge The challenge involves 11 benchmarks of various computer vision problems, with the goal of contributing state-of-the-art algorithms (and code) to OpenCV. Is anyone here thinking of participating? I'll be working on some of the "recognition" benchmarks. Gil.
2014-11-11 06:13:59 -0600 | commented answer | Object classification (pedestrian, car, bike) I would try Caffe. |
2014-11-06 02:04:09 -0600 | asked a question | Regarding AKAZE features - descriptor_type enum Hi, AKAZE features have the following enum that describes the descriptor type: I just want to make sure: if I use DESCRIPTOR_MLDB (which is also the default), does that mean AKAZE will be rotation invariant? Thanks, Gil.
2014-10-30 10:15:32 -0600 | received badge | ● Nice Answer (source) |
2014-10-30 10:12:44 -0600 | marked best answer | Problems in adding a new descriptor to OpenCV Hi, I'm trying to add the BinBoost descriptor to OpenCV. The sources can be found here: link text It's really straightforward, as the authors already implemented the DescriptorExtractor class. The problem is that the constructors depend on certain binary files as input, which they use to initialize their inner structures. So one can easily construct a BinBoostDescriptorExtractor as: But one cannot use the simpler "create" command as: What can I do about it? Will the OpenCV moderators be willing to accept a new descriptor (or, more precisely, a family of 3 descriptors) that can't be initialized using "create"? Thanks in advance, Gil
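The tension described above - a create-style factory needs a no-argument constructor, while BinBoost's constructor needs a path to trained weights - can be sketched with a toy registry (all names here are hypothetical, not OpenCV's actual Algorithm machinery):

```python
class DescriptorRegistry:
    """Toy create-style factory: it can only instantiate classes that
    work with a no-argument constructor."""
    _classes = {}

    @classmethod
    def register(cls, name, klass):
        cls._classes[name] = klass

    @classmethod
    def create(cls, name):
        return cls._classes[name]()  # fails if __init__ requires arguments

class BinBoostExtractor:
    """Hypothetical wrapper: default-constructible because it falls back
    to a bundled default weights file when no path is given."""
    DEFAULT_WEIGHTS = "binboost_256.bin"  # assumed bundled default

    def __init__(self, weights_path=None):
        self.weights_path = weights_path or self.DEFAULT_WEIGHTS

DescriptorRegistry.register("BinBoost", BinBoostExtractor)
```

One common resolution, shown by the fallback above, is to ship a default model alongside the code so the parameterless construction path exists; whether that is acceptable for an OpenCV contribution is exactly the question being asked.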
2014-10-14 08:02:13 -0600 | asked a question | Having problems with resize/subsample (without interpolation) Hi, I'm trying to resize/subsample an image without interpolation. To make things clearer, I would like to replace the following code: int height_res=55; int width_res=55; With a simple resize statement. I tried the following: But it didn't give the exact same results (and I really need it to be precise). Can someone please correct me? Thanks, Gil.
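A plausible explanation for the mismatch above (the exact rounding cv::resize uses varies by version, so treat this as an assumption): a hand-rolled loop typically samples src = floor(dst * scale), while a nearest-neighbor resize may use the pixel-center convention src = floor((dst + 0.5) * scale), and the two pick different source pixels. A pure-Python sketch of both conventions:

```python
def subsample_floor(img, out_h, out_w):
    """src = floor(dst * scale): what a hand-written double loop usually does."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]

def subsample_center(img, out_h, out_w):
    """src = floor((dst + 0.5) * scale): the pixel-center convention used by
    some resize implementations; can differ from the loop by a pixel."""
    in_h, in_w = len(img), len(img[0])
    sy, sx = in_h / out_h, in_w / out_w
    return [[img[min(int((y + 0.5) * sy), in_h - 1)]
                [min(int((x + 0.5) * sx), in_w - 1)]
             for x in range(out_w)] for y in range(out_h)]
```

On a 4x4 image reduced to 2x2 the first convention keeps rows/columns 0 and 2 while the second keeps 1 and 3, which is exactly the kind of "close but not identical" result described in the question.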
2014-10-09 12:41:25 -0600 | commented question | Building a simple 3d model : Using build3dmodel.cpp You can take a look at the blog post that I wrote which explains how to create 3D models using the Bundler and PMVS packages: http://gilscvblog.wordpress.com/2014/05/15/an-easy-and-practical-guide-to-3d-reconstruction/ |
2014-09-21 16:47:23 -0600 | commented answer | How to add an algorithm to OpenCV? Thanks for your help! |
2014-09-19 17:48:19 -0600 | asked a question | How to add an algorithm to OpenCV? Hi, I'm using OpenCV2.4.9 with Visual Studio 2012 on Windows 8.1 I added a new descriptor to OpenCV and I would like to test it outside of the OpenCV solution (in a new solution). How do I create updated lib and hpp files? I would like to have a new "build" directory that will contain all the updated files (lib, dll and hpp) according to the updated code. Do I need to apply CMake again? perhaps building the project "BUILD" in the OpenCV solution? Thanks in advance, Gil. |
2014-09-12 10:35:43 -0600 | asked a question | Extracting SIFT/SURF descriptor from pre-cropped patches Hi, I have a set of 100K 64x64 gray patches (already aligned, meaning they all have the same orientation) and I would like to extract a SIFT descriptor from each one. It is clear that all I need to do is define a vector with one keypoint kp such that kp.x=32, kp.y=32. However, I don't know how to set the kp.size parameter. From going over SIFT's code, it looks as if it does some non-trivial calculations with that parameter rather than just assuming it's the size of the patch. Question 1: what should the kp.size parameter be when extracting SIFT descriptors from 64x64 patches? Question 2: what should the kp.size parameter be when extracting SURF descriptors from 64x64 patches? Thanks in advance, Gil.
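The "non-trivial calculations" mentioned above can be worked through. Based on my reading of the 2.4.x sift.cpp (calcSIFTDescriptor), the sampled window radius is derived from kp.size via two constants; treat the constants and the formula as assumptions to verify against your OpenCV version, not gospel:

```python
import math

# Constants as they appear in OpenCV 2.4.x sift.cpp (assumed; verify).
DESCR_WIDTH = 4       # 4x4 grid of orientation histograms
DESCR_SCL_FCTR = 3.0  # width of one histogram cell, in units of scale

def descriptor_radius(kp_size):
    """Radius in pixels of the square window SIFT samples around a
    keypoint at octave 0, whose scale is 0.5 * kp.size."""
    scale = 0.5 * kp_size
    hist_width = DESCR_SCL_FCTR * scale
    return hist_width * math.sqrt(2) * (DESCR_WIDTH + 1) * 0.5

def size_for_patch(patch_size):
    """Invert descriptor_radius: the kp.size whose sampling window just
    covers a centered square patch of the given side length."""
    radius = patch_size / 2.0
    return radius / (0.5 * DESCR_SCL_FCTR * math.sqrt(2)
                     * (DESCR_WIDTH + 1) * 0.5)
```

Under these assumptions a 64x64 patch corresponds to kp.size of roughly 6, i.e. the window radius grows about 5.3x faster than kp.size. For SURF, a similar reading of surf.cpp (sampling scale s = size * 1.2 / 9 over a 20s window) would suggest kp.size around 24 for a 64-pixel patch, but again, both numbers are derived from my reading of the source and worth double-checking empirically.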
2014-09-03 14:00:26 -0600 | received badge | ● Nice Question (source) |