
r.a.'s profile - activity

2018-11-10 06:08:21 -0600 received badge  Famous Question (source)
2017-05-12 18:04:18 -0600 received badge  Taxonomist
2016-12-06 08:53:50 -0600 received badge  Nice Answer (source)
2016-11-14 18:25:40 -0600 received badge  Nice Answer (source)
2016-05-30 09:04:10 -0600 received badge  Notable Question (source)
2015-08-20 02:08:07 -0600 received badge  Popular Question (source)
2013-12-10 08:51:36 -0600 asked a question Recommendation for an ARM-based board (e.g. Odroid, i.MX6) for a face detection application

Hi, I want to build a system using relatively low-cost hardware that can run face detection at 15 FPS or more.

Such as: Odroid, i.MX6, PandaBoard, ...

Requirements:

1. Easy, well-tested OpenCV integration.

2. 15 FPS or more when running the face detection sample (full frame rate preferred); see the sketch below for how this could be measured.

3. Preferably a board with a large OpenCV (or general computer vision) community.

4. Should be able to run for years (hopefully).
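For requirement 2, a minimal FPS check on a candidate board could look like the sketch below (assuming Python/OpenCV; the stock frontal-face Haar cascade, camera index 0 and the detection parameters are illustrative assumptions, not requirements):

    import time
    import cv2

    # Stock frontal-face cascade shipped with OpenCV; the path may differ per install.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # board camera, index 0 assumed

    frames, t0 = 0, time.time()
    while frames < 200:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=3)
        frames += 1

    print("%.1f FPS" % (frames / (time.time() - t0)))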

Any suggestions or tips based on your knowledge would be great.

2013-11-10 02:02:13 -0600 commented answer How to Combine Motion and Appearence descriptor

Hi,

1. Yes, you can. (I didn't do so, but you should be able to.)
2. It is probably not the best optical flow feature, but it may be a good starting point.
3. Check the type (int, float, ...).
4. Do HOG.compute for Mx and My separately (see the sketch below).
5. Try different block sizes, smaller than your HOG's.
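To illustrate points 4 and 5, here is a rough sketch (assuming Python/OpenCV; the video file name, the Farneback parameters and the default 64x128 HOG window are placeholder choices, not taken from the thread):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input.avi")  # placeholder video
    _, f0 = cap.read()
    _, f1 = cap.read()
    prev_gray = cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)

    # Dense optical flow; flow[..., 0] is Mx, flow[..., 1] is My.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    hog = cv2.HOGDescriptor()  # default 64x128 window
    descs = []
    for comp in (flow[..., 0], flow[..., 1]):
        # HOG expects an 8-bit image, so rescale each flow component first.
        comp8 = cv2.normalize(comp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        comp8 = cv2.resize(comp8, (64, 128))
        descs.append(hog.compute(comp8).flatten())

    motion_descriptor = np.concatenate(descs)  # HOG of Mx and My, concatenated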

2013-11-05 10:01:52 -0600 answered a question How to Combine Motion and Appearence descriptor

Hi,

This is the common scheme for such a case (a minimal sketch follows the notes below):

1. Extract the HOG feature; extract the optical flow feature.

2. Normalize the HOG vector; normalize the OF vector.

3. Concatenate the HOG and OF vectors.

4. Normalize the concatenated vector again.

Some things to remember:

1. Test which normalization method works best in your case (L2, L1, ...).

2. Try different vector lengths for each feature.

3. Try several optical flow techniques.
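A minimal sketch of steps 1-4, assuming Python/OpenCV; the function name combined_descriptor, the default 64x128 HOG window, the Farneback parameters and the 16x16 flow downsampling are illustrative choices, not part of the original answer:

    import cv2
    import numpy as np

    def l2(v):
        # L2-normalize a vector (steps 2 and 4); try L1 as well, as noted above.
        return v / (np.linalg.norm(v) + 1e-6)

    def combined_descriptor(prev_gray, gray):
        # Step 1a: appearance feature (HOG on the current grayscale frame).
        hog = cv2.HOGDescriptor()  # default 64x128 window
        hog_vec = hog.compute(cv2.resize(gray, (64, 128))).flatten()

        # Step 1b: motion feature (dense optical flow between consecutive frames).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        of_vec = cv2.resize(flow, (16, 16)).flatten()  # crude vector-length reduction

        # Steps 2-4: normalize each part, concatenate, normalize again.
        return l2(np.concatenate([l2(hog_vec), l2(of_vec)]))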

Good luck

2013-11-04 08:34:46 -0600 answered a question About Cascade Classifier Training

Hi,

1. In the case of two different views of the same object: if there is a big difference in the object's appearance, it is recommended to train two classifiers. If there is no big difference in appearance, you may train only one classifier. Either way, try it and you'll find out what works best for you. (Post some positive images and I'll try to give a recommendation.)

2. The positive images may be color or gray; it doesn't matter, since the algorithm uses only the gray values and ignores the color information. The reason is that color information changes heavily under different lighting conditions.
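As a tiny illustration of point 2 (a sketch only, assuming Python/OpenCV and a placeholder file name), this conversion is effectively all the trainer ever sees of your color data:

    import cv2

    img = cv2.imread("positive_0001.png")          # color or grayscale file, either works
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # the intensity values the features are built on
    cv2.imwrite("positive_0001_gray.png", gray)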

Good luck,

2013-11-04 04:43:27 -0600 received badge  Self-Learner (source)
2013-10-31 13:09:45 -0600 answered a question opencv_traincascade Negative samples training method

I'll try to summarize this topic to help others.

I have reviewed the code and, thanks to StevenPuttemans' help, I think I finally got it.

1. The cascade (AdaBoost) classifier has stages.

In each stage we have numPos and numNeg samples.

2. The numPos samples are the number of positive samples used for training in the i-th stage. They are not the total number of samples in the vec file.

You may choose it to be, for example, 0.95 * (number of samples in the vec file).

3. The numNeg samples are the number of negative samples used in training the i-th stage. They are picked randomly (cropped and scaled from the negative images).

There could be more or fewer of them than the total number of negative images you have.

For example: suppose you have 1000 negative images; numNeg may still be 5000 samples.

The samples that are picked are only those that were mistakenly classified as positive by stage i-1.

This is a good idea, since we are sure that only the more difficult negatives go on to the next stage.

4. Hard negatives are the false alarms (FA) that you get after running your final classifier on a set of negative images or on video.

You may add these so-called hard negatives by simply cropping and resizing them to the positive sample width and height (see the sketch after point 6 below).

5. The acceptance ratio of the negatives is the number of negatives classified as positive divided by the number classified correctly as negative in each stage. For example, 1/1000 means that when 1000 windows are picked randomly from the negative images, one of them is classified as positive.

As explained in point 3, only those classified as positive by stage i-1 become the negatives of stage i.

6. The FA rate criterion is per window only! I believe it would be better if we also had an FA criterion at the image level (FA/image), not just at the window level. (Of course there is a correlation between them, but you still want to know your error rate at the image level.) Currently, just make sure that you have a very low error at the window level and hopefully you will also be fine at the image level.
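Here is a minimal hard-negative-mining sketch for point 4 (assuming Python/OpenCV; "cascade.xml", "negatives.txt" and the 30x26 sample size are placeholder names and values):

    import cv2

    cascade = cv2.CascadeClassifier("cascade.xml")
    sample_w, sample_h = 30, 26  # positive sample width and height
    hard_negatives = []

    with open("negatives.txt") as f:
        for line in f:
            img = cv2.imread(line.strip(), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            # Every detection on a negative image is a false alarm: crop it,
            # resize it to the positive sample size and keep it as a hard negative.
            for (x, y, w, h) in cascade.detectMultiScale(img):
                patch = cv2.resize(img[y:y+h, x:x+w], (sample_w, sample_h))
                hard_negatives.append(patch)

    for i, patch in enumerate(hard_negatives):
        cv2.imwrite("hard_neg_%05d.png" % i, patch)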

I hope this will clarify some things for others as well.

2013-10-31 07:08:44 -0600 received badge  Self-Learner (source)
2013-10-31 03:45:27 -0600 answered a question opencv_traincascade negative samples training algorithm

After reviewing the code I can answer it myself:

1. The numPos and numNeg parameters are indeed the number of training samples used in each stage (some may be the same samples as in previous stages; read on).

So, yes, in each stage we pick numPos and numNeg samples that are predicted to be positive. Only the numNeg negative samples that are mistakenly predicted as positive (according to the stage i-1 classifier) go on to the next round. Likewise, only the numPos positive samples that are correctly predicted as positive go on to the next round (if a positive sample was mistakenly predicted as negative in a previous stage, there is no point training on it in this stage, since it will already be rejected by stage i-1). This is a good idea, since we are sure that only hard negatives go on to the next stage.

2. Yes, as explained above.

3. Yes, just crop and resize your negative samples to the exact window width and height, and you can add them as hard negatives. (See also StevenPuttemans' comment at http://answers.opencv.org/question/22964/opencv_traincascade-negative-samples-training/.)

4. I believe it would be better if we also had a criterion at the image level (FA/image), not just at the window level. (Of course there is a correlation between them, but you still want to know your error rate at the image level.)

If I do it, I will share.

Currently, just make sure that you have a very low error at the window level and hopefully you will also be fine at the image level; a rough estimate is sketched below.
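A back-of-the-envelope sketch of the window-level vs. image-level point (the 0.5 per-stage FA rate, 14 stages and ~100,000 windows per image are assumed round numbers, not measurements):

    stage_fa = 0.5
    n_stages = 14
    window_fa = stage_fa ** n_stages       # ~6.1e-5 false alarms per window
    windows_per_image = 100000             # rough figure for a full multiscale scan
    print(window_fa * windows_per_image)   # expected false alarms per image, roughly 6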

I hope this will clarify some things for others as well.

2013-10-29 02:45:05 -0600 answered a question opencv_traincascade giving error: parameters cannot be written, because params.xml cannot be opened.

You have wrong parameters; try something like this:

-data C:\MyProject\data -vec C:\MyProject\Vector.vec -bg C:\MyProject\Neg.txt -numPos 500 -numNeg 2000 -numStages 14 -precalcValBufSize 50 -precalcIdxBufSize 256 -featureType LBP -w 30 -h 26 -minHitRate 0.998 -maxFalseAlarmRate 0.5

Probably in -data you have specified a file instead of a folder. See also the OpenCV documentation of traincascade, where all the parameters are described, and search this forum to learn more about them; there are many questions on this topic. Good luck!

2013-10-28 10:23:51 -0600 received badge  Nice Answer (source)
2013-10-28 09:56:17 -0600 received badge  Teacher (source)
2013-10-28 08:54:22 -0600 received badge  Student (source)
2013-10-28 08:05:12 -0600 commented answer What should the parameters given for the haartrainig to be a success?

O.k., actually you didn't look at the answer I already pointed you to: http://docs.opencv.org/doc/user_guide/ug_traincascade.html. See there that -data is a folder: -data <cascade_dir_name>. Anyway, this is your third question; please open it as a new question.

If I answered your original question, please vote.

2013-10-28 07:58:54 -0600 commented answer opencv_traincascade Negative samples training method

O.k., thanks for the help, I appreciate it.

2013-10-28 05:32:57 -0600 commented answer What should the parameters given for the haartrainig to be a success?

Try the OpenCV tutorial http://docs.opencv.org/doc/user_guide/ug_traincascade.html and http://stackoverflow.com/questions/17184818/opencv-traincascade-for-lbp-training. Try, for example, a line like this (starting point only): opencv_traincascade -data "data" -vec "samples.vec" -bg "out_negatives.dat" -numPos 500 -numNeg 2000 -numStages 16 -featureType LBP -w 20 -h 20 -bt GAB -minHitRate 0.995 -maxFalseAlarmRate 0.5 -weightTrimRate 0.95 -maxDepth 1 -maxWeakCount 100. I believe you now have what you need to start.

2013-10-28 00:13:38 -0600 answered a question What should the parameters given for the haartrainig to be a success?

Hi, I also asked myself those questions, so I can tell you from my experience. (By the way, look in this forum for more answers on this topic; there are many of them!)

Regarding your question:

1. Your positive and negative images must match the problem scenario you want to solve! (Is there a typical background, lighting condition, ...?) Try to have positive and negative samples that span the problem you are trying to solve.

2. The positive samples should have approximately the same rotation/viewpoint in all positive images. If you need several viewpoints, train several classifiers, one per viewpoint.

3. Use traincascade instead of haartraining and use the LBP feature for faster training.

4. Start with the default parameters.

5. Try, for example, numPos = 500 and numNeg = 2000 for a start, and check your result (a quick check is sketched below); if it's not good enough, add more positive and negative samples later.

Also, as I said, look at the other questions on this topic here in the forum.
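A quick result check for tip 5 could be as simple as the sketch below (assuming Python/OpenCV; "cascade.xml" and "test.jpg" are placeholder names):

    import cv2

    cascade = cv2.CascadeClassifier("cascade.xml")
    img = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)
    detections = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)
    print("%d detections" % len(detections))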

Good luck!

2013-10-27 14:45:54 -0600 answered a question Averaging Video Frames Into One Image

1. You need to build a background model of the video (the average frame).

2. Then the foreground is approximately: current frame - "average frame".
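A minimal sketch of these two steps (assuming Python/OpenCV; "input.avi" and the 0.05 learning rate are placeholders), using cv2.accumulateWeighted to keep the running average frame:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input.avi")
    ok, frame = cap.read()
    if not ok:
        raise SystemExit("could not read the video")
    avg = np.float32(frame)  # running "average frame" (the background model)

    while ok:
        cv2.accumulateWeighted(frame, avg, 0.05)     # blend the current frame into the model
        background = cv2.convertScaleAbs(avg)
        foreground = cv2.absdiff(frame, background)  # approx. current frame - "average frame"
        ok, frame = cap.read()

    cap.release()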

Look at this link for code and a detailed explanation:

http://mateuszstankiewicz.eu/?p=189

Also have a look at opencv docs on background subtraction: http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#backgroundsubtractor

Hope this helps

2013-10-27 14:34:14 -0600 received badge  Scholar (source)
2013-10-27 07:30:28 -0600 received badge  Editor (source)
2013-10-26 14:07:28 -0600 asked a question opencv_traincascade negative samples training algorithm

Hi, I am successfully training a classifier using the traincascade module, but...

I don't understand a couple of things. I have read http://answers.opencv.org/question/4368/traincascade-error-bad-argument-can-not-get-new/#4474, which is a detailed explanation of the numPos parameter. But numNeg is not explained, nor is the exact mechanism of choosing the numNeg samples in each stage.

1. Is it true that numNeg samples/windows are used in each stage? So in each stage I have numNeg new samples?

2. Since I don't know the answer to question 1, let's define nneg_cascade = the number of negative samples in each stage. Is it true that in each stage we have nneg_cascade new samples which are classified as positive by the stage i-1 classifier?

3. Is there a way to force a specific negative image to be taken as a sample? For example, by resizing the negative image to the width and height of the positive vector?

4. Is there a way to make sure that I'll have, for example, a 1/1000 FA rate per image on the negative training image set?

I guess one option is to have enough positives and negatives that span the problem, to use enough stages (for example 30) and many negative windows (1G) with a 0.5 FA rate per stage, giving (FA per stage)^(number of stages), and hopefully I'll have a very small error rate per image.

But is there a better way?

Why not have a threshold on FA per image instead of an FA per window/sample criterion? (This is closer to real-world needs: making sure I have, let's say, no more than 1/1000 FA per frame, at least on the training images!)

The problem is that I cannot directly ask the trainer to give me that classification error rate.

Thanks for the help

2013-10-26 12:41:37 -0600 received badge  Supporter (source)
2013-10-26 12:12:02 -0600 commented answer opencv_traincascade Negative samples training method

Hi, thanks for the detailed answer. Just to make sure: 1. I understand that numNeg is the number of negative windows, not images! 2. So the FA rate is on those windows, not on the negative image set! (That's why, after finishing building a classifier, I got many FAs on those training negative images, as happened to me.) 3. Is there a way to make sure that a certain image will be added as a negative window, for example if the size of the negative image equals the size of the scanning window?

2013-10-25 01:36:29 -0600 asked a question opencv_traincascade Negative samples training method

Hi, I'm successfully using the OpenCV traincascade module with LBP.

Still, I'm not sure my training is "logical".

I have two questions:

  1. Is numNeg, the number of negative samples, simply the number of training images, or is it the number of training samples taken from the images? (So there could be, for example, 1000 negative images but 2000 samples taken from those images as negatives.)
  2. FA rate: is it FA per image? So in each stage do we scan the negative images and make sure that none of them is classified as positive (according to the desired FA)? I can ask it another way: if the FA rate is 0.5 and we have 14 stages, we get a 0.5^14 error rate. Does this mean that after the learning process ends, the error on those negative images will be 0.5^14 per frame?

Actually, I don't understand what happens inside the training function at each stage when I ask for a 0.995 detection rate and a 0.5 FA rate.

The positive part is simple and clear. But what about the negative samples? After choosing features, do we then run the current classifier over the negatives, scanning them (block by block, rescaling and all that), and check that we fulfilled the stage error?

I know it's a long question, but I hope for a simple answer.

2012-08-09 10:49:53 -0600 asked a question Can't run OpenCV4Android samples in emulator

Hi, I'm simply trying to use the OpenCV JNI functionality for Android. I've followed this: http://docs.opencv.org/doc/tutorials/introduction/android_binary_package/android_binary_package.html (Windows Vista)

Installed:

  1. OpenCV 2.4.2 for Android (changed the 2.4.2 lib from API 9 to 11).
  2. Eclipse Helios with ADT and CDT.
  3. NDK (latest version).
  4. SDK platform (API 11).
  5. Emulator with camera for API 11.
  6. All OpenCV projects are at API 11.

Works:

  1. The NDK examples compile and run.

Problems with the OpenCV samples:

Compilation goes just fine, but no sample runs successfully on the emulator. I load a sample and am asked to install OpenCV Manager; the installation fails (no connection to Google Play), so I install it manually from the command line (cmd; the armv7 package too), but after opening the application it still doesn't recognize the manager!

any idea?