
lepass7's profile - activity

2016-10-31 02:00:18 -0600 received badge  Famous Question (source)
2016-04-04 10:45:09 -0600 received badge  Notable Question (source)
2016-03-02 22:41:13 -0600 received badge  Nice Question (source)
2016-02-15 15:38:31 -0600 received badge  Student (source)
2016-01-10 23:50:47 -0600 received badge  Popular Question (source)
2014-08-11 12:41:00 -0600 received badge  Editor (source)
2014-08-11 12:40:16 -0600 asked a question opencv_traincascade Parameters explanation, image sizes etc

Hello guys, I have been trying for a long time now to train a decent classifier and get reliable results from an object detection script. I was following this tutorial: http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html (which, by the way, was very helpful), but during the process a lot of questions came up. I will try to ask all of them now, and I think the answers will help a lot of people, not just me.

  1. Negative and positive images:

    a. How many negative and positive images should I have?

    b. Should there be more positives than negatives? If so, what is the best ratio between negatives and positives?

    c. Is there a preferable format for the pictures (bmp, jpg, png, etc.)?

    d. What should the size of the negative images be, and what should the size of the positive images be? Let's say my negative images are 640x320 and the "to be detected" object is 100x50. Should all of the images in the negatives folder be 640x320? Should the positives folder contain 640x320 cropped images with the object visible in them? Or should I place 100x50 images containing only the object in the positives folder?

    e. When cropping positive images, should I remove everything except the object from the background? Or should I just use a rectangle around the object, including some of the surrounding background?

    f. I tried to use the "famous" imageclipper program, with no luck. Has anyone gotten it to work? Is there a walkthrough tutorial for installing this program?

    g. opencv_createsamples: Is it necessary? How many samples should I use? Regarding -w and -h, a lot of tutorials online say that these should be proportional to the real images. So should all of my positive images have exactly the same size? If my positive images are 100x50 and I use -w 50 -h 25 as parameters, will the images be cropped or scaled down? Will this affect the training and, ultimately, the detection procedure?

  2. opencv_traincascade: Below are all the parameters:

    -vec <vec_file_name>

    -bg <background_file_name>

    [-numPos <number_of_positive_samples = 2000>]

    [-numNeg <number_of_negative_samples = 1000>]

    [-numStages <number_of_stages = 20>]

    [-precalcValBufSize <precalculated_vals_buffer_size_in_Mb = 256>]

    [-precalcIdxBufSize <precalculated_idxs_buffer_size_in_Mb = 256>]

    [-baseFormatSave]

    --cascadeParams--

    [-stageType <BOOST(default)>]

    [-featureType <{HAAR(default), LBP, HOG}>]

    [-w <sampleWidth = 24>]

    [-h <sampleHeight = 24>]

    --boostParams--

    [-bt <{DAB, RAB, LB, GAB(default)}>]

    [-minHitRate <min_hit_rate = 0.995>]

    [-maxFalseAlarmRate <max_false_alarm_rate = 0.5>]

    [-weightTrimRate <weight_trim_rate = 0.95>]

    [-maxDepth <max_depth_of_weak_tree = 1>]

    [-maxWeakCount <max_weak_tree_count = 100>]

    --haarFeatureParams--

    [-mode <BASIC(default) | CORE | ALL>]

    --lbpFeatureParams--

    --HOGFeatureParams--

Can anyone explain all of these: what each one does, and how it affects the training and the detection?

  3. Training:

    During training I am getting output like this:

===== TRAINING 0-stage =====

POS count : consumed   400 : 400
NEG count : acceptanceRatio    1444 : 1
Precalculation time: 12
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        1|
+----+---------+---------+
|   3|        1| 0.454986|
+----+---------+---------+

Training until now has taken 0 days 0 hours 20 minutes 11 seconds.

Can anyone explain that table and all the other information?

  4. After training: I have trained my classifier for 5 stages and was able to find some objects in images (with a lot ...
2014-04-29 11:18:44 -0600 asked a question Is there any webcam buffering?

Hello guys, I am trying to read single frames from a webcam. I can do that using this code:

import time

import cv2

cap = cv2.VideoCapture(0)  # open the default webcam

while True:
    print "\n\nMain while LOOP\n\n"
    q = raw_input("Take picture? y/n\n")
    if q == "n":
        cap.release()
        break
    elif q == "y":
        #cap.open(0)
        if cap.isOpened():
            start_t = time.clock()
            print "--Trying to capture image..."
            s = False
            frame = None
            try:
                i = 0
                while True:
                    s, frame = cap.read()
                    i = i + 1
                    #cv2.imshow("cam-test", frame)
                    cv2.waitKey(1)
                    print "Attempts: ", i
                    if s:
                        break
                if s:
                    print "ok"
            except Exception as inst:
                print "Error occurred: ", type(inst)

quit()

As you can see, the program waits for the user to press "y" or "n". If they press "n", the program stops and that is the end; if they press "y", it tries to capture a single frame (the frame is then processed and the user can see the processed result). When processing is finished and the user has seen the result, the program returns to the "y"/"n" question. The problem is that when they press "y", the captured frame is not a "fresh" frame (what the camera sees at that exact moment) but the same frame as the first one. I have noticed that the camera only delivers a "fresh" frame the sixth time the user presses "y". I am guessing that somewhere (in hardware or software) there is a buffer. Can I change that with OpenCV?