
atv's profile - activity

2020-11-22 03:58:23 -0600 received badge  Popular Question (source)
2018-10-23 11:55:51 -0600 received badge  Notable Question (source)
2017-11-09 17:12:21 -0600 received badge  Popular Question (source)
2017-01-12 09:37:07 -0600 asked a question neural networks /bif

I saw a post from berak about BIF and neural networks. I'm new to this, but would it give better results than using an eigenfaces method?

How would I use the code below? I understand faces/att is the directory with faces, but where does one put the corresponding labels for it to recognize the user? Or do I not understand it completely?

answers.opencv.org/question/119300/how-to-start-with-neural-network-implementation-with-opencv-and-c/

2016-12-29 17:12:09 -0600 asked a question ideal crop size for face recognition?

I'm using 144x144 now, but I was wondering if I should cut out the mouth and chin region, so it would not be affected so much by facial hair growth. Right now my trained dataset is out of date the moment someone shaves off their beard. It's too user-intrusive (and useless for any kind of monitoring) to retrain every time this changes.

Also - would it be a good idea to segment out the face from the background on a training image? This way clothes would not be trained, and only the face.

2016-12-26 17:15:50 -0600 commented question detecting a photograph

Yes, that detects a border around a photograph, but it seems to find rectangles everywhere. Any way I can tune the parameters, maybe?

2016-12-25 13:44:13 -0600 commented question detecting a photograph

Well, I'm trying to detect photographs on a live webcam. I noticed that it detects the back of the photograph fine (all white) but not the front, where the picture is.

2016-12-24 13:51:25 -0600 asked a question detecting a photograph

Hi all, I'm trying to detect a photograph, maybe up to 3 or 4 at a time. I tried using https://github.com/opencv/opencv/blob...

But it works a bit too well (detecting lots of rectangles everywhere). If I hold up a book it identifies rectangles on the book, but not the book itself. Sometimes it does, but very rarely (same for a photograph). Is there any way of finding the contours closest to the camera? Or should I adjust the threshold? (I tried that.)

Maybe I should go for blob detection?

Thanks for your suggestions. atv

2016-12-16 15:26:06 -0600 commented question linked list iterate within main for loop impacts face processing

Thank you. Updated my question with code. I removed all the other stuff and it seems the area() call is actually what is taking so long. As I'm storing all the Rects of the faces in this linked list (so I can compare whether they intersect), I'm not sure how to design around this (b in tmp is a Rect).

2016-12-16 14:55:42 -0600 asked a question linked list iterate within main for loop impacts face processing

Hi everyone. I'm using the following facerecognizer example. http://docs.opencv.org/2.4/modules/co...

Within the for loop in the main() function, I am iterating over a linked list to search for intersecting rectangles. This seems to be having an impact, in that it can't handle more than 2 faces.

What alternative is there? Should I use a vector? Threads?

Thanks

update: This is the part I put in the for loop. The area() call seems to be the culprit.

    tmp = head;
    while (tmp) {
        bool intersects = ((a & tmp->b).area() > 0);
        if (intersects) cout << "intersected" << endl;
        else cout << "not intersected" << endl;
        tmp = tmp->next;
    }

2016-12-15 06:16:15 -0600 commented question gaze display

I can give it a go; I had hoped the software had all that included already. I mean, it clearly has the position of the pupil (which gets its position from findEyeCenter), so I thought it wouldn't be too hard to find a stable reference point.

Either clandmark, or maybe just redetect the eyes with a cascade and use that?

2016-12-15 03:48:22 -0600 commented question gaze display

My code is loosely based on https://github.com/trishume/eyeLike, do you mind having a look at that? I replaced the circle functions in main.cpp with a line function.

I have the eye center and the pupil movement, so I should be able to get this working.

2016-12-14 15:55:54 -0600 asked a question gaze display

I'm trying to create a line from a pupil into the screen, pointing in the direction where the pupil is pointing. A gaze detector if you will.

But for some reason I can only get the end of the line that terminates on the pupil to move, not the other end (which is what I want). It doesn't matter if I swap the two Point() arguments to line().

line(image, rightpupil, rightpupil + cv::Point(-15, -2), CV_RGB(255, 5, 255), 2, 8, 0);
line(image, leftpupil, leftpupil + cv::Point(-15, -2), CV_RGB(255, 5, 255), 2, 8, 0);

Is this a code thing, or am I doing something wrong?

2016-12-05 03:06:38 -0600 commented answer How to remove bad lighting conditions or shadow effects in images using opencv for processing face images

How does this differ from doing a "regular" equalizeHist(img, img) without splitting the channels?

2016-11-06 05:42:41 -0600 commented question check if Rects intersect

    haar_cascade.detectMultiScale(tracking, facesold, 1.2, 4, 0 | CASCADE_SCALE_IMAGE, Size(min_face_size, min_face_size), Size(max_face_size, max_face_size));

2016-11-06 05:39:08 -0600 commented question check if Rects intersect

    // Find the faces in the frame:
    vector< Rect_<int> > faces;
    vector< Rect_<int> > facesold;
    Mat tracking = gray.clone();
    haar_cascade.detectMultiScale(gray, faces, 1.2, 4, 0 | CASCADE_SCALE_IMAGE, Size(min_face_size, min_face_size), Size(max_face_size, max_face_size));

    Mat face_resized;
    for (int i = 0; i < faces.size(); i++) {
        cout << "Going into i loop" << endl;
        for (int j = 0; j < facesold.size(); j++) {
            cout << "Going into j loop" << endl;
            Rect a = faces[i]; // was faces[j]; the current face is indexed by i
            Rect b = facesold[j];
            //cout << "track face:" << i << a << b << endl;
            bool intersects = ((a & b).area() > 0);
            if (intersects) cout << "Tracking " << j << endl;
            else cout << "Lost tracking for " << j << endl;
        }

continued below

2016-11-06 05:39:02 -0600 commented question check if Rects intersect

I came up with something like this: make a clone() and run detectMultiScale on it again. But it still isn't working as it should. I'm doing a detectMultiScale before and after, so I get a copy of the Rect vector. But it never seems to satisfy j < facesold.size(), and hence never goes into the j loop. When I add another face to the picture it does work; it seems there is some sort of fencepost error there.

2016-11-05 14:13:08 -0600 asked a question check if Rects intersect

Hi all, I wanted to implement a simple check to see if the face in the new frame is the same as in the previous frame. I cobbled up something like the code below.

But I don't think it's working. Is it because when I make a copy of the faces vector, it is not a deep copy?

    haar_cascade.detectMultiScale(gray, faces, 1.2, 4, 0 | CASCADE_SCALE_IMAGE, Size(min_face_size, min_face_size), Size(max_face_size, max_face_size));
    // At this point you have the position of the faces in
    // faces. Now we'll get the faces, make a prediction and
    // annotate it in the video. Cool or what?

    // Note: this copies the faces just detected in the *current* frame, so
    // track[i] and faces[i] below are always the same Rect and always
    // intersect. To be useful, track has to be saved before the new
    // detectMultiScale call overwrites faces.
    vector< Rect_<int> > track = faces; // Keep the old faces in a separate vector

    Mat face_resized;

    for (int i = 0; i < faces.size(); i++) {
        Rect a = track[i];
        Rect b = faces[i];
        bool intersects = ((a & b).area() > 0);
        if (intersects) cout << "same window" << endl;
        else cout << "different window" << endl;
        // Process face by face:
        Rect face_i = faces[i];
        // Crop the face from the image. So simple with OpenCV C++:
        Mat face = gray(face_i);

2016-11-01 18:39:10 -0600 asked a question how to resize image to 144x144

Using cv::resize, would I just put 144,144 in the 4th and 5th arguments?

2016-11-01 18:07:20 -0600 asked a question SVM opencv3 mog2

I'm converting some OpenCV 2 code which uses SVM. In some posts I see that I should use mSVM->setC(10). What does that do?

//CvSVM mSVM;
Ptr<ml::SVM> mSVM=ml::SVM::create();
mSVM->setC(10);

Also, what is the equivalent in OpenCV 3 for converting the set call below?

TargetExtractor.cpp:    //mMOG->set("detectShadows", false);

Thanks!

2016-11-01 09:51:24 -0600 commented question resolution not responding properly to query

Just to elaborate: by "not showing up" I mean it takes the requested resolution all right, but the cout does not always show the right resolution, almost as if it hadn't had time yet to update it.

2016-11-01 04:35:10 -0600 asked a question resolution not responding properly to query

If I set the resolution, then query it right after, it doesn't (always) show up. Sometimes it does, but it's not consistent.

I know most of this is old syntax; I'm converting as I go along. Maybe that's the reason the reported resolution value is inconsistent. And yes, I know I'm setting CAP_PROP_FRAME_WIDTH and querying CV_CAP_PROP_FRAME_WIDTH. I can't set CV_CAP_PROP_FRAME_WIDTH, that's why :-)

    capture.set(cv::CAP_PROP_FRAME_WIDTH, 320);
    capture.set(cv::CAP_PROP_FRAME_HEIGHT, 240);

    // We'll allow these values to be adjusted later in our program
    // but as a default we base them on resolution
    cout << capture.get(CV_CAP_PROP_FRAME_WIDTH) << endl;
    cout << capture.get(CV_CAP_PROP_FRAME_HEIGHT) << endl;
    lvl1=capture.get(CV_CAP_PROP_FRAME_WIDTH)/30;
    lvl2=capture.get(CV_CAP_PROP_FRAME_WIDTH)/20;
    lvl3=capture.get(CV_CAP_PROP_FRAME_WIDTH)/10;
    cout << "Level 1 threshold set to: " << lvl1 << endl;
    cout << "Level 2 threshold set to: " << lvl2 << endl;
    cout << "Level 3 threshold set to: " << lvl3 << endl;

Any ideas would be appreciated.

2016-10-15 03:46:27 -0600 commented question gender classification

I'll try that.

Well, I have trained some good models now. One with a female bias (but that's because there are many more female pictures in the set) and 2 which are neutral, 50-50. They are quite good.

Why would a prediction come up as 0.00000, though? I can't seem to put my finger on what causes this. Hence my question about randomness in the other topic.

2016-10-15 03:42:16 -0600 commented question training a model

Sorry, I meant gender classification. But yes, using the FaceRecognizer API. So no randomness? Same input = same results?

I'm just asking (because of a lack of knowledge, really), but I sometimes get weird results. Today everything is going well though :-)

2016-10-15 02:46:57 -0600 asked a question training a model

Stupid question maybe, but does training a model always give the same results? If I remove it and re-train, can I expect the same model file? (Training on the same pictures and the same number of pictures, of course.)

Is it a one-time thing, or, once I get good results, can I delete the saved model, retrain later, and get the same results?

2016-10-14 16:23:48 -0600 asked a question gender classification

I had fixed the eigenvalue 0.000000 thing I had the other day. Or so I thought. It seems to come up randomly when training a model. I don't know what is causing it to appear; I wish I did.

I used to think I needed an even number of pictures, but this doesn't seem to be the case. Sometimes I need to delete one, sometimes I need to make the count even to get good results. Removing one picture can influence the outcome of the training. If I don't get it right, the value oscillates between the male and female labels.

I currently have 600 female pictures and 290 male pictures, all aligned. Gender classification works very well. But yesterday I had an even better model, which somehow I deleted (figures).

Anyway, I guess what I'm trying to say is, I wish I had a better grasp on what influences this, as I'd like to build an even better model with more pictures. But as long as I don't know what causes the 0.00000 or the wild oscillation between labels, there's not much point. It's not the ratio of pictures, it seems, so I guess that leaves the type of pictures.

Is there something messing up my model? There's a picture of someone which shows black as a thumbnail, but when I open it, it displays fine. Must be a jpg thing (I save them as png when realigning). The face is also extracted and landmarks applied, so that can't be it.

PS: this is not haartraining; I'm using the FaceRecognizer model->train.

2016-10-11 18:05:34 -0600 asked a question confidence 0.000000

I'm sure I'm missing something silly here, but I just trained and saved a model for male and female, labels 1 and 2, and I'm getting the above as the confidence (and it's not changing), and it only ever displays female.

It worked fine for face recognition.

2016-10-08 03:40:15 -0600 commented question Haar Training detectMultiScale does not work very well when size vary

What method are you using to detect the parking sign? How have you trained the cascade? How many negatives and positives?

Some code would be helpful.

2016-10-08 03:38:35 -0600 commented answer detectMultiScale hangs on Mac / Xcode / OpenCV 3.1.0

That's interesting, if you can reproduce it. What OS version are you running? And does it give a segmentation fault, or does it just quit?

2016-10-07 10:13:08 -0600 answered a question size of the same image is different when it is read differently

The loader applies screen density scaling. See: http://stackoverflow.com/questions/73...

2016-10-07 10:01:33 -0600 commented question i'm trying to do drawContours in a detected object in a video frame my program was crash (Break) because of findContours() method... plzzz giving solution ...

Please state the error message you get. Less code, more pointers please. Also, don't put the content of your question in the subject header :-)

2016-10-07 09:39:35 -0600 commented question Checking if face was in previous frame

How about something like this:

int framenumber;

for (;;) { // frames
    if (framenumber == 2)
        framenumber = 0; // was framenumber == 0, a comparison instead of a reset

    framenumber++;
    for (;;) // faces

    // after predict
    if (predict) {
        usernode.framenumber++;

        if (framenumber + usernode.instances == usernode.framenumber + usernode.instances) {
            showprediction
            framenumber = 0;
            usernode.framenumber = 0;
        }
    }
2016-10-07 08:19:23 -0600 commented question Checking if face was in previous frame

I just need a simple way, by way of a marker, to tell if the user appeared in 2 contiguous frames. I could tune detectMultiScale, but I don't want to lose any precision in detecting faces.