2020-09-29 02:14:36 -0600 | received badge | ● Nice Question (source) |
2018-12-28 19:50:24 -0600 | received badge | ● Famous Question (source) |
2018-11-26 01:57:54 -0600 | received badge | ● Famous Question (source) |
2017-11-24 07:36:17 -0600 | marked best answer | traincascade: object detection size. I have trained a cascade using LBP to detect a certain object. I trained it on 400 positive images of objects of the same size (100 x 40). To train the cascade, I used the following command: opencv_traincascade -data data -vec object.vec -bg bg.txt -numPos 400 -numNeg 500 -numStages 14 -w 50 -h 20 -featureType LBP -maxFalseAlarmRate 0.4 -minHitRate 0.99 -precalcValBufSize 2048 -precalcIdxBufSize 2048 Now when I use this cascade on test images, will it detect objects only of the size they were trained for, or can it detect bigger objects too? I tried using images (500 x 500) with objects larger than (100 x 40) and it cannot detect the objects in them. |
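The mechanism behind the question above can be sketched in plain Python. `detectable_widths` is a hypothetical helper, not part of OpenCV: it lists the object widths a detector with a fixed training window can match when the image is repeatedly rescaled by `scaleFactor`, which is roughly what `detectMultiScale` does internally with its image pyramid. The point is that objects smaller than the training window are unreachable, while larger ones are covered by the pyramid.

```python
def detectable_widths(window_w, image_w, scale_factor=1.1):
    """Widths (in original-image pixels) reachable by scanning a fixed
    window_w-wide window over an image pyramid of the given image."""
    widths = []
    scale = 1.0
    # Each pyramid level shrinks the image by scale_factor, so the fixed
    # window corresponds to an ever larger region of the original image.
    while window_w * scale <= image_w:
        widths.append(round(window_w * scale))
        scale *= scale_factor
    return widths

sizes = detectable_widths(50, 500)
print(sizes[0], sizes[-1])  # smallest and largest detectable widths
```

So a cascade trained with -w 50 should, in principle, find wider objects in a 500 x 500 image; if it does not, the detection-time parameters (e.g. minSize/maxSize passed to detectMultiScale) are worth checking.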
2017-06-06 04:30:45 -0600 | received badge | ● Notable Question (source) |
2017-01-31 02:38:41 -0600 | received badge | ● Popular Question (source) |
2016-10-15 03:49:35 -0600 | received badge | ● Nice Answer (source) |
2016-10-15 03:46:33 -0600 | marked best answer | traincascade : openMP vs Intel's TBB ? I have an Intel i5 processor with 8 GB RAM, running Ubuntu 14.04. I am working on cascade training with LBP in OpenCV 2.4.9. Training takes a huge amount of time. After waiting a week for the cascade to train, it is really painful to see it not working correctly and to figure out that it needs to be trained on more samples. I tried installing OpenCV with TBB (Threading Building Blocks), with no notable speed-up in training. What else can I do to make it more time-efficient? I found a link https://iamsrijon.wordpress.com/2013/... demonstrating the use of openMP. Is openMP better than TBB? Any tutorial for reference? Any help would really be very helpful. |
2016-10-15 03:46:07 -0600 | received badge | ● Nice Question (source) |
2016-10-06 03:30:14 -0600 | received badge | ● Nice Answer (source) |
2016-04-26 08:20:27 -0600 | asked a question | active appearance model I have been using the dlib library to detect faces and it is working really well. I delved a bit deeper into it and found it is based on the concepts of the active appearance model (AAM) and the active shape model (ASM). I found no explanations of the algorithm. All the internet resources seem to offer is a series of steps to be followed, without any understanding. I would be grateful if someone could explain how it works. A simple intuition and some links to the necessary resources would really make it easier for people like me to understand. |
2016-02-01 04:54:34 -0600 | commented question | Installation on Linux Difficulties There are many tutorials available, of which I would suggest this one. It should help you! |
2016-02-01 04:38:41 -0600 | commented answer | Save detected eyes in form of images In that case, mark the answer as the correct one so that the topic is closed and the accepted answer is visible to others too. |
2016-01-22 02:58:46 -0600 | edited question | How do I use openMP along with openCV I have an Intel i5 processor with 8 GB RAM, running Ubuntu 14.04. I am working on cascade training with LBP in OpenCV 2.4.9. Training takes a huge amount of time. After waiting a week for the cascade to train, it is really painful to see it not working correctly and to figure out that it needs to be trained on more samples. Are there any means of shortening the time requirements? Any help would really be very helpful. |
2016-01-14 02:23:52 -0600 | edited question | Save detected eyes in form of images I have been working on eye detection and I want to detect and then extract the eyes from a video feed at specific intervals, storing the detected eyes as images. I have done the detection of the eyes using a Haar cascade; now I just want to save the detected eyes as image files. Can anyone tell me what I can do to solve this? The detection code is as follows |
2016-01-13 02:55:49 -0600 | answered a question | Save detected eyes in form of images You are using 'haarcascade_eye_tree_eyeglasses.xml', which returns information about individual eyes. Since image files are rectangular, you cannot save the individual eyes in circular form. But you can extract each eye the same way you extracted the face: use eyes[j].x, eyes[j].y, eyes[j].width and eyes[j].height to crop the rectangular eye region. Once you have the cropped region, use imwrite() to save it under the name you would like to assign, as shown below
Alternatively, you can also use haarcascade_mcs_eyepair_big.xml to extract information about both eyes together. |
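The cropping step described in the answer above can be sketched with plain NumPy (the synthetic array stands in for a captured frame; in real code you would call cv2.imwrite on the crop):

```python
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame

# A detection as returned by CascadeClassifier.detectMultiScale: (x, y, w, h)
x, y, w, h = 200, 120, 60, 40

# Crop the rectangular region of interest: rows (y) first, then columns (x).
eye_roi = frame[y:y + h, x:x + w]

print(eye_roi.shape)  # (40, 60, 3): h rows, w columns, 3 channels
# cv2.imwrite("eye_0.png", eye_roi)  # save under any filename you like
```

Note that NumPy indexes rows before columns, so the slice order is `[y:y+h, x:x+w]`, the reverse of the (x, y) order in the detection rectangle.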
2016-01-12 06:46:59 -0600 | answered a question | motion COMPENSATION between 2 frames? I have dealt with a similar problem before. Here is what you can do: 1) First, extract two consecutive frames (which I guess you already have). 2) Calculate the optical flow between the frames. Optical flow estimates, for each point in frame 1, where it has moved to in frame 2 (dense methods use brightness constancy; sparse methods track feature points), so it gives you the magnitude and direction of the displacement of all the points from frame 1 to frame 2. 3) Now you can interpolate the frame between the two using simple algebra: interpolation predicts the position of an object in a frame located between two given frames. OpenCV provides built-in functions to calculate optical flow, i.e. motion estimation between two frames; you can find more details about them here. With these functions, implementing motion estimation and generating the interpolated frame should not be a problem. I hope this helps. |
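Step 3 of the answer above (interpolation) can be sketched with plain NumPy. The flow values here are made up for illustration; in real code they would come from an optical-flow call such as cv2.calcOpticalFlowPyrLK or cv2.calcOpticalFlowFarneback. Assuming linear motion, the position at intermediate time t in [0, 1] is a linear blend along the flow vector:

```python
import numpy as np

points_f1 = np.array([[10.0, 20.0], [50.0, 60.0]])   # positions in frame 1
flow      = np.array([[ 4.0,  2.0], [-6.0,  0.0]])   # displacement to frame 2

def interpolate_positions(points, flow, t):
    """Positions at intermediate time t (0 = frame 1, 1 = frame 2),
    assuming each point moves linearly along its flow vector."""
    return points + t * flow

mid = interpolate_positions(points_f1, flow, 0.5)
print(mid)  # [[12. 21.] [47. 60.]]
```

Warping the pixel values to these interpolated positions then yields the in-between frame.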
2016-01-12 06:42:33 -0600 | answered a question | how to compile opencv from source with cmake? You can follow the instructions here. This tutorial deals with the installation of OpenCV on Ubuntu using CMake. You can also find tutorials for installing OpenCV on other systems at the same place. |
2016-01-12 06:37:58 -0600 | answered a question | opencv on ununtu error on pointing Cmake variable opencv_DIR to build of opencv You seem to have installed OpenCV for Windows and are trying to run it on Ubuntu, which for obvious reasons will not work. You need to download the version of OpenCV compatible with your system. Here is the guide to download and successfully install the OpenCV libraries on Ubuntu. |
2016-01-12 06:33:01 -0600 | answered a question | OpenCV installation into eclipse You can check this out: link It explains every detail you might need to get OpenCV working with Eclipse CDT, including the steps for linking your code to the OpenCV libraries, without which your OpenCV-dependent code will throw multiple errors. To further simplify things, it also includes screenshots of the various stages of the setup process, and a working example to verify the setup is included at the end. Follow this tutorial to the end and you should be able to run OpenCV code successfully. Hope this helps! |
2016-01-12 06:25:04 -0600 | answered a question | Motion estimation between 2 frames I have dealt with a similar problem before. Here is what you can do: 1) First, extract two consecutive frames (which I guess you already have). 2) Calculate the optical flow between the frames. Optical flow estimates, for each point in frame 1, where it has moved to in frame 2 (dense methods use brightness constancy; sparse methods track feature points), so it gives you the magnitude and direction of the displacement of all the points from frame 1 to frame 2. 3) Now you can interpolate the frame between the two using simple algebra: interpolation predicts the position of an object in a frame located between two given frames. OpenCV provides built-in functions to calculate optical flow, i.e. motion estimation between two frames; you can find more details about them here. With these functions, implementing motion estimation and generating the interpolated frame should not be a problem. I hope this helps. |
2016-01-12 05:51:02 -0600 | commented question | How to start using houghlines in lane detection / tracking You can have a look at this link. It is explained very well and in simple language. |
2015-10-08 10:43:27 -0600 | asked a question | Exact human shape extraction. I am trying to recognize humans in images. I have tried using haarcascade_fullbody.xml and hogcascade_pedestrians, and both give only okay results. Are there any better methods for detecting the human body? Also, I am interested in extracting the exact shape of the human body, i.e. the silhouette, rather than the bounding rectangle around the person. Could someone suggest a way to do this? |
2015-09-01 15:48:48 -0600 | received badge | ● Good Answer (source) |
2015-06-24 02:44:16 -0600 | commented question | How to find percentage of image content in a page Try using OCR (optical character recognition) over the scanned page. The areas where no characters are recognized could be classified as containing an image, assuming the image does not have any text over it. |
2015-06-11 03:05:13 -0600 | asked a question | object recognition using HOG features. I am extracting HOG features from images and using these numerical values to train an SVM for object classification. Is this the correct way to build a classifier for object recognition? Also, how do I go about training the SVM? |
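The feature-extraction half of the pipeline in the question above can be sketched with plain NumPy. This is a toy version of the core HOG idea (a gradient-orientation histogram weighted by gradient magnitude): it uses a single cell over the whole patch, whereas real HOG uses many cells with block normalisation. The resulting fixed-length vector is what you would then feed to an SVM (e.g. cv2.ml.SVM or scikit-learn).

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Toy HOG-style feature: histogram of unsigned gradient orientations,
    weighted by gradient magnitude, L1-normalised."""
    gy, gx = np.gradient(patch.astype(float))        # row- and column-gradients
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    idx = (ang / 180.0 * bins).astype(int) % bins    # bin each pixel's angle
    hist = np.zeros(bins)
    np.add.at(hist, idx, mag)                        # accumulate magnitudes
    s = hist.sum()
    return hist / s if s > 0 else hist

# A patch with purely horizontal intensity ramps: one dominant orientation.
patch = np.tile(np.arange(16.0), (16, 1))
feat = orientation_histogram(patch)
print(len(feat), int(np.argmax(feat)))
```

Training then amounts to stacking such vectors for positive and negative examples into a matrix and passing it, with labels, to the SVM's train/fit call.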
2015-05-25 03:35:12 -0600 | commented answer | How to get the bit-depth of an image? My bad for the typing error. It should be 8 bits, as it returns CV_8U. What I simply mean is that the depth() function tells you the type of a single channel, from which you get the number of bits used to represent it (CV_8U means 8 bits). |
2015-05-25 00:53:29 -0600 | answered a question | How to get the bit-depth of an image? Check this. It says that |
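The relationship discussed in the answer and comment above can be made concrete with a small lookup table. The numeric codes match OpenCV's CV_8U..CV_64F depth constants; depth() returns one of these codes, and the bit count per channel follows from the code rather than being returned directly:

```python
# OpenCV depth code -> (constant name, bits per channel)
DEPTH_BITS = {
    0: ("CV_8U", 8),  1: ("CV_8S", 8),
    2: ("CV_16U", 16), 3: ("CV_16S", 16),
    4: ("CV_32S", 32), 5: ("CV_32F", 32), 6: ("CV_64F", 64),
}

def bits_per_channel(depth_code):
    """Bits used to represent a single channel for a given depth code."""
    return DEPTH_BITS[depth_code][1]

print(bits_per_channel(0))  # a CV_8U image uses 8 bits per channel
```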
2015-05-24 07:02:43 -0600 | answered a question | Traincascade parameters -> -numPos and -numNeg While training a cascade, we use more negative samples than positive ones. The reason is to teach the cascade to reject non-object regions easily, which reduces false detections and increases computational efficiency. While training the cascade, we decide the window size; the cascade can then detect objects no smaller than that window. The trainer randomly picks window-sized patches from the negative images, so you could have only 1000 unique negative images of 500 x 500 while -numNeg is 10000 or more with a window size of 50 x 50. This gives you more negative samples without the need for more unique negative images. Hope this clears stuff up! |
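A back-of-the-envelope sketch of the point made above: a 50 x 50 window sliding over a 500 x 500 image yields many distinct candidate patches, so -numNeg can safely exceed the number of unique negative images. The stride here is hypothetical (traincascade's actual sampling strategy differs); this only shows the scale involved:

```python
def windows_per_image(img_w, img_h, win_w, win_h, stride):
    """Number of window positions when sliding a win_w x win_h window
    over an img_w x img_h image with the given step in both axes."""
    nx = (img_w - win_w) // stride + 1
    ny = (img_h - win_h) // stride + 1
    return nx * ny

n = windows_per_image(500, 500, 50, 50, stride=25)
print(n)  # hundreds of candidate windows from a single negative image
```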
2015-05-23 01:06:04 -0600 | commented answer | understanding the limitations of traincascade The entire image (object with background) is given to the cascade for training! The .txt file that we make for the positive data contains the image name along with the boundary of the object. So you do not use the cropped image, but the entire image with the details of the object's location. Hope this clears things up. |
2015-05-21 06:14:36 -0600 | answered a question | understanding the limitations of traincascade Though there are no strict rules for training a cascade for object detection, here are certain results I have gathered during my experiments with traincascade. I will first answer your questions and add my own points as needed.
Having a large number of positive samples and an even greater number of negative samples (generally 3-4 times as many) proves useful. All said, it is trial and experimentation for your specific application that makes a cascade perform well. |
2015-05-18 08:41:27 -0600 | asked a question | Training your own model I am using the flandmark library and the dlib library for various purposes. Both have separate pre-built models for facial landmark detection. From what I read and understood, various images with annotated landmarks are fed to a cascade, which can then predict the landmarks in test images. I am interested in understanding how this works. Can someone explain how the model is trained and generated? Is there any document available explaining it? Also, is it possible to generate your own model? |
2015-05-11 07:03:40 -0600 | commented question | cascade classifier distinguish and recognize similar objects @Lorena GdL: if haar cascades are classifiers, why can't they be used to classify faces into types like male/female, or into different expressions? This made me jump to the conclusion that cascades can simply detect the presence of a certain object but cannot be used for classification. Could you explain? |
2015-05-11 03:19:00 -0600 | commented question | cascade classifier distinguish and recognize similar objects The cascades are not classifiers! They can be used to detect a particular pose of a car, e.g. if I train a cascade to detect cars seen from the side, it will try to detect side views of cars in the test image. The positive images should ideally have the object separated from the background. Besides, the more positive and negative samples you have, the better detection rate you can expect from the cascade. Typical training of a cascade with LBP features won't take more than a few hours. For classification, you could probably use an SVM! |
2015-05-11 01:27:38 -0600 | commented question | cascade classifier distinguish and recognize similar objects What do you mean by the pose of the car? Do you want to classify it into side view and front view? An AdaBoost cascade can be used to recognize, in a picture, the one particular pose of the car for which it was trained. The problem in your case is that extracting features from three positive images is very unreliable: you need a huge dataset of positive images and an even larger dataset of negatives. Also, while training the cascade to detect a particular pose of the car, make sure all the positive images show the same pose. |
2015-04-27 02:13:10 -0600 | answered a question | undefined reference while using EclipseCDT and opencv This is a typical linker error. Link all the OpenCV libraries correctly to your project, and make sure 'highgui' is among them, as a missing highgui seems to be the most probable cause of the problem you are facing. You can also look at this to make sure no other errors occur. Hope this helps. |