2015-07-17 13:04:01 -0500 answered a question How to find a "contour" of 3d object?
It is not possible to get a 3D image from a set of images taken from the same point of view but with different focal lengths, unless you misused technical terms and meant something different. Changing the focal length simply means zooming in or out, so you have a series of images of the same object(s), some smaller and some bigger, but the amount of information about those objects is the same. To reconstruct 3D information you have to look at the same scene from two or more points of view, so that the objects are not depicted in exactly the same way.

2015-06-19 03:25:12 -0500 commented question Haar Training got stuck
At the beginning of each cycle _offset.x and _offset.y are both 0. When they come back to being both 0, it means that all possible negative windows have been checked.

2015-06-18 15:33:32 -0500 commented question Haar Training got stuck
The source code of traincascade/imagestorage.cpp, in function NegReader::nextImg(), doesn't provide a mechanism to throw an error in such a case. But I have to say that stage 6 is very early to get stuck for this reason, unless there are only very, very few images.

2015-06-18 07:02:17 -0500 commented question Haar Training got stuck
How many negative images have you got? If the algorithm is not able to collect 4000 negative windows from them, it will run forever.

2015-06-18 02:04:17 -0500 commented question Haar Training got stuck
There is nothing wrong with your parameters (you could increase maxDepth, but this shouldn't be the reason you are stuck). What do you mean by "I got stuck at stage 6"? Do you get an error message? Or is the training completed at stage 6? How did you get all your positive samples? Are they artificial (i.e. generated from just a very few of them) or really 1000 different samples? Please show the output on the screen.
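The "runs forever" failure mode can be illustrated with a minimal sketch. This is a hypothetical simplification, not the real traincascade code: offsets sweep the image and wrap around, and the loop exits only once enough windows have been collected.

```python
# Hypothetical simplification of the negative-window scan in
# NegReader::nextImg(): offsets sweep across the image and wrap around;
# the loop only exits once `needed` windows have passed the cascade, so
# with too few usable windows it spins forever.
def scan_negatives(img_w, img_h, win, passes_cascade, needed):
    collected, full_passes, x, y = 0, 0, 0, 0
    while collected < needed:
        if passes_cascade(x, y):
            collected += 1
        x += 1
        if x + win > img_w:
            x, y = 0, y + 1
            if y + win > img_h:
                y = 0                  # _offset.x and _offset.y back to (0, 0)
                full_passes += 1
                # The real code has no exit here; the sketch adds one so it halts.
                if full_passes > 1 and collected == 0:
                    raise RuntimeError("no negative window ever passes: "
                                       "the real loop would run forever")
    return collected
```

`passes_cascade` stands in for the check that a window is rejected by no stage trained so far; when it never succeeds, the real scan wraps around indefinitely.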
2015-06-17 12:10:12 -0500 received badge ● Necromancer (source)

2015-06-17 11:25:36 -0500 answered a question HAAR training fails at different stages
Look at the question: How to train cascade properly. The answer is the last one, by Giuseppe Dini.

2015-06-17 07:27:49 -0500 commented question Haar Training got stuck
Please add more information.

2015-05-26 05:43:10 -0500 commented question How to do OpenCV in Windows Application?
The way I've followed to do the same thing you are asking about here is advised against by most programmers. Anyway, I want to share my experience. I used interop: I divided my application into an OpenCV/C++ module and a C# one. I'm much more productive in Visual Studio with C#, and at the same time I want to keep the more delicate and performance-critical part completely written in C++, which is the original language of OpenCV. I have no problems because I have clearly defined a few functions in the C++ library that give me the needed level of interaction, and there are no bottlenecks. Of course this approach can only work if you know in advance what these few functions are and you don't want complete control over the framework. Otherwise you can go with a wrapper such as Emgu.

2015-05-26 05:23:46 -0500 commented question how to call image from memory of program?
It is not clear what you mean by "other program". What level of communication is there with this program?
I suppose you have no control over the "other program" and you only know the size of the image, its format and a pointer to its data. On Windows machines, if you have administrator privileges, you can create a Mat header with the information you have about the image and then copy the image data from the first program to yours through:

[DllImport("kernel32.dll")]
public static extern bool ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress, byte[] lpBuffer, int dwSize, ref int lpNumberOfBytesRead);

2015-05-18 02:30:52 -0500 commented question Does opencv_traincascades give consistent results over time?
Did you interrupt the training process and then resume it?

2015-05-11 16:18:09 -0500 answered a question Does opencv_traincascades give consistent results over time?
Have you ever heard about chaos theory and deterministic chaos? Well, I think this is an interesting case study, even though I challenge anyone to find the equations for it. The training algorithm is multithreaded when it comes to finding the best split of the decision tree. Inside the function:

CvDTreeSplit CvDTree::find_best_split( CvDTreeNode* node )

there is a call to cv::parallel_reduce, based on TBB. As far as I know, the collecting phase of negatives is single-threaded, instead. My hypothesis is that the parallel mechanism does not guarantee a fixed order of operations, so minor differences occur at every run of the training algorithm, and those negligible differences eventually magnify, stage after stage. Even the detection algorithm is not perfectly repeatable, but there those minor differences remain minimal.

EDIT: I tried to reproduce the behaviour I observed some time ago (non-deterministic results during training), but I was not able to recreate those conditions. Too much time has passed since then (now I use a truly random mechanism to extract negatives). Anyway, I did reproduce a non-deterministic outcome for detection that I observed more recently.
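The order-sensitivity introduced by a parallel reduction is easy to demonstrate: floating-point addition is not associative, so combining partial sums in a different grouping can give a different total. A minimal illustration:

```python
# Floating-point addition is not associative: the grouping chosen by a
# parallel reduction (e.g. cv::parallel_reduce splitting work across
# threads) changes the rounding, and hence the result.
vals = [1e16, 0.7, -1e16]

sequential = (vals[0] + vals[1]) + vals[2]   # 0.7 is lost: float spacing at 1e16 is 2.0
regrouped  = (vals[0] + vals[2]) + vals[1]   # cancellation happens first, 0.7 survives
# sequential == 0.0, regrouped == 0.7
```

In a boosting run such tiny differences decide ties between candidate splits, and a different split early on changes every later stage.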
I have a program for automatic testing of a batch of about 500 images that records the results. Over many runs there is no difference in the hit rate and the false positive rate, but the average width and position of the detected rectangles is very slightly shifted. This is sufficient to say that the detection is not deterministic in those conditions (including the phase of rectangle gathering). My hypothesis was that this was linked to non-determinism in the training phase, and that those slight differences magnified over time, but as I cannot reproduce it any more, I cannot add more or say it for sure. Some time ago a thread was opened by a user who said that he observed random behaviour even though he used a precompiled .vec file of negatives. Anyway, a different source of randomness could be the interruption and resumption of the training process: the algorithm doesn't record the last offset, so it restarts collecting negative windows from a different position (I reproduced this yesterday, and in fact it is non-deterministic). An alternative explanation is that the absence or corruption of just one image on different machines is enough to change the results.

2015-05-10 15:13:21 -0500 commented answer advice for a hand tracking algorithm
I agree. Image processing is no different from robot localisation in this respect. Anyway, Kalman filters rely on strong assumptions about the motion of the object (for example, linear with constant velocity); techniques like particle filters are more general and better suited to tracking objects in images.

2015-05-09 02:42:23 -0500 commented question Neuronal network predict access violation
How many samples are you using for training?

2015-05-09 02:33:32 -0500 commented question Existing something like Microsofts How-Old by OpenCV
I think that such a task requires a huge number of samples to train the estimator. Probably Microsoft itself, with all its means, should increase the number of samples, because it is still inaccurate.
2015-05-09 02:25:08 -0500 commented question Detection of texture portions in a image
As far as I know, there is no ready-made code in OpenCV to do such a job; you should write your own detector. I think that wavelet-based techniques are the best suited for this, and wavelets are not difficult to compute in OpenCV.

2015-05-09 02:21:51 -0500 commented answer advice for a hand tracking algorithm
You said it right: "can be improved", in the sense that the trajectory can be smoothed, but the main problem of tracking remains.

2015-05-01 14:23:47 -0500 commented question Emotions from profile faces
I've just sent an email to him.

2015-05-01 02:44:08 -0500 commented question Emotions from profile faces
@berak, I've tested the face detector and it seems great! I've looked for some information about the algorithm, but I've mainly found links to their own website. Furthermore, incredibly, there are only 2 citations reported by Google Scholar. How did you find out about it? Anyway, I think that a self-implemented version of the ideas reported in the paper is not prohibited.

2015-05-01 02:07:44 -0500 commented question Emotions from profile faces
@berak, do you know if "pico" is patented?

2015-04-29 14:13:49 -0500 commented answer Help me with the opencv_traincascade training
I don't know neuroph, but I think that training a NN would be a good choice (a NN with multiple outputs, each one corresponding to a class to be recognised).

2015-04-28 15:07:07 -0500 commented answer Help me with the opencv_traincascade training
Unless you: 1) gather many more samples, 2) decide in advance a criterion to crop the area of the flower, 3) possibly restrict your detection goal only to flowers seen from a certain point of view, 4) understand that you cannot use AdaBoost to classify flowers, but only to detect where a generic flower is located, you will never obtain a decent result.
2015-04-26 15:35:02 -0500 commented answer Help me with the opencv_traincascade training
Am I the only one to notice that -numPos was set to 1521 out of 10,000 available samples?

2015-04-26 14:19:01 -0500 answered a question Cascade training for closed eye detection
It would be better to use totally random images, but nobody prevents you from using some images of the open eye. It does not surprise me that your algorithm is not able to distinguish between right and left eyes, as they are very similar. In principle you could use some left eyes as negative samples, but I fear this would undermine the hit rate (precisely because they are too similar). I think that the only solution here is to perform face detection first and search for each eye only on the side of the face where you expect to find it.

2015-04-12 04:56:49 -0500 commented answer Multiple objects classification
And too slow, I guess. :)

2015-04-12 03:34:09 -0500 answered a question Multiple objects classification
Yes, traincascade can be used to detect objects of one particular type. The more variable these objects are in shape, the more difficult the task is. In principle, you could train a single cascade using, as positives, samples of all the types of objects you want to detect [phase 1], and then use some method to assign each detected object to one particular class [phase 2]. For phase 2 you could even use an algorithm that is slower than boosting, as it only has to run over the few detected objects. You could experiment yourself, but I'm sceptical about the results you could get following this route: there is too much variability among the samples in phase 1, and this should slow down the running time and worsen the detection rate. (Anyway, when I can, I always run experiments for my projects to make sure that what I suspect is correct; many times I've discovered that things are different from what I imagined.)
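To see why the running time is dominated by the number of sliding windows, here is a rough count. This is a simplified cost model with made-up numbers: unit stride, square windows, and a hypothetical 1.1 scale factor (a real detector also strides and prunes windows).

```python
def num_windows(img_w, img_h, min_size, scale_factor=1.1):
    """Rough count of detection windows: slide a square window of growing
    size over the image one pixel at a time (a simplified cost model)."""
    total, size = 0, float(min_size)
    while int(size) <= min(img_w, img_h):
        s = int(size)
        total += (img_w - s + 1) * (img_h - s + 1)
        size *= scale_factor
    return total
```

With a hypothetical 640x480 image, raising the minimal window size from 24 to 96 pixels removes the many small-scale windows that dominate the total.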
The classical way to achieve your goal is to train multiple classifiers, run each of them in turn over each image and put the results together. Yes, the detection time will be the sum of all the individual detection times and the program will be slower. To reduce the overall detection time, you can use some tricks depending on the particular objects you are detecting. For example, if you are detecting faces and eyes, you can run the detection algorithm for eyes only inside detected faces. If you are detecting big objects, you can use a larger minimal window size (small windows are the ones that slow down the running time the most, as many more sliding windows have to be checked). If you are detecting oranges, you can run the detection only over areas with certain colours. A large RAM doesn't matter; for this purpose you mostly need a good CPU (number of cores and clock speed).

2015-04-07 10:18:29 -0500 received badge ● Enthusiast

2015-04-06 12:40:24 -0500 received badge ● Good Answer (source)

2015-04-06 12:40:24 -0500 received badge ● Enlightened (source)

2015-04-04 06:53:06 -0500 edited answer traincascade detections with output score for precision recall curves
As you know, we can imagine this kind of classifier as a function which assigns a pair of values to every window it gets as input: rejectLevels, the integer value representing the stage where the window was eventually rejected, and levelWeights, the double value the boosting algorithm outputs (the one thresholded to pass to the next level of the cascade). The overloaded detectMultiScale(…) only considers and gathers the windows that reach the last 4 stages (source code: if( classifier->data.stages.size() + result < 4 )). What you experienced depends only on the small number of samples used to train the classifier. In such a situation it can happen that just one weak classifier per stage is enough to separate negatives from positives.
If so, only two values are assigned by each stage, -1.0 and +1.0, and every threshold between them separates the two groups perfectly. Hence you get either a +1, when the sample is classified as positive (that is, it passes through all the stages including the final one; keep in mind that there are many errors), or a -1 (it reached stage last-1, last-2 or last-3 but did not pass). This also explains why model 3 needs fewer stages to train: some of its stages require more than one weak classifier, and so each does a better job compared with the stages of models 1 and 2.
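Once you have per-window weights, a precision-recall curve is obtained by sweeping a threshold over them. A sketch with made-up scores and a hypothetical ground-truth count (in recent OpenCV Python bindings the analogous values come from detectMultiScale3 with outputRejectLevels=True):

```python
# Hypothetical (levelWeight, is_true_positive) pairs for windows that
# reached the final stages, plus a made-up ground-truth positive count.
detections = [(3.2, True), (2.1, True), (1.4, False),
              (0.9, True), (0.5, False), (-0.3, False)]
num_ground_truth = 4

def precision_recall(threshold):
    # Keep only detections whose weight clears the threshold.
    kept = [is_tp for weight, is_tp in detections if weight >= threshold]
    if not kept:
        return 1.0, 0.0
    tp = sum(kept)                       # True counts as 1
    return tp / len(kept), tp / num_ground_truth

# One (precision, recall) point per threshold traces out the curve.
curve = [precision_recall(t) for t in (-1.0, 0.0, 1.0, 2.0)]
```

Raising the threshold trades recall for precision, which is exactly what the degenerate ±1 weights above cannot express: every threshold between -1 and +1 yields the same point.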