Ask Your Question

Shaban's profile - activity

2020-06-09 07:59:28 -0600 received badge  Famous Question (source)
2019-02-21 13:00:05 -0600 received badge  Notable Question (source)
2018-08-08 01:58:59 -0600 received badge  Notable Question (source)
2018-03-11 01:38:42 -0600 received badge  Notable Question (source)
2017-06-29 07:36:07 -0600 received badge  Notable Question (source)
2017-05-16 12:56:58 -0600 received badge  Notable Question (source)
2017-04-24 17:02:41 -0600 received badge  Popular Question (source)
2016-11-07 23:14:01 -0600 received badge  Famous Question (source)
2016-08-23 18:54:00 -0600 received badge  Notable Question (source)
2016-05-24 07:18:10 -0600 received badge  Popular Question (source)
2016-04-28 19:37:58 -0600 received badge  Famous Question (source)
2016-03-15 10:55:06 -0600 received badge  Popular Question (source)
2015-12-12 05:10:24 -0600 received badge  Taxonomist
2015-12-02 02:23:01 -0600 received badge  Popular Question (source)
2015-10-12 04:15:45 -0600 received badge  Popular Question (source)
2015-09-16 09:37:57 -0600 received badge  Notable Question (source)
2015-09-05 16:59:28 -0600 received badge  Good Question (source)
2015-07-09 08:48:29 -0600 received badge  Popular Question (source)
2015-06-01 08:00:45 -0600 received badge  Notable Question (source)
2015-05-26 20:12:34 -0600 received badge  Notable Question (source)
2015-01-08 03:42:05 -0600 received badge  Popular Question (source)
2014-12-09 13:58:28 -0600 marked best answer Where are the CascadeClassifier and detectMultiScale implementations located?

Hi guys, I want to learn how the HOG descriptor, CascadeClassifier, and detectMultiScale algorithms work (step by step). Can you tell me where they are located in the OpenCV source tree? I mean the .cpp files, etc.

I'd appreciate any help here. Thanks! :)

2014-12-09 13:52:52 -0600 marked best answer How to track a blob per pixel? (blob using findContours)

Hi all,

Here's my code below; there's a place where I need to track a blob per pixel.

Can you tell me how to do that?

int main(int argc, char *argv[])
{
    cv::Mat frame;                                              
    cv::Mat fg;     
    cv::Mat blurred;
    cv::Mat thresholded;
    cv::Mat thresholded2;
    cv::Mat result;
    cv::Mat bgmodel;                                            
    cv::namedWindow("Frame");   
    cv::namedWindow("Background Model");
    cv::namedWindow("Blob");
    cv::VideoCapture cap("campus3.avi");    

    cv::BackgroundSubtractorMOG2 bgs;                           

        bgs.nmixtures = 3;
        bgs.history = 1000;
        bgs.varThresholdGen = 15;
        bgs.bShadowDetection = true;                            
        bgs.nShadowDetection = 0;                               
        bgs.fTau = 0.5;                                         

    std::vector<std::vector<cv::Point>> contours;               

    for(;;)
    {
        cap >> frame;                                           

        cv::GaussianBlur(frame,blurred,cv::Size(3,3),0,0,cv::BORDER_DEFAULT);

        bgs.operator()(blurred,fg);                         
        bgs.getBackgroundImage(bgmodel);                                

        cv::threshold(fg,thresholded,70.0f,255,CV_THRESH_BINARY);
        cv::threshold(fg,thresholded2,70.0f,255,CV_THRESH_BINARY);

        cv::Mat elementCLOSE(5,5,CV_8U,cv::Scalar(1));
        cv::morphologyEx(thresholded,thresholded,cv::MORPH_CLOSE,elementCLOSE);
        cv::morphologyEx(thresholded2,thresholded2,cv::MORPH_CLOSE,elementCLOSE);

        cv::findContours(thresholded,contours,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE);
        cv::cvtColor(thresholded2,result,CV_GRAY2RGB);

        int cmin = 50; 
        int cmax = 1000;

        std::vector<std::vector<cv::Point>>::iterator itc=contours.begin();

        while (itc!=contours.end()) {

                if (itc->size() > cmin && itc->size() < cmax){

                  //tracking blob here!

                }
                ++itc; // advance the iterator; without this the loop never terminates
        }

        cv::imshow("Frame",frame);
        cv::imshow("Background Model",bgmodel);
        cv::imshow("Blob",result);
        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}

I'd appreciate any help here... Thanks :)
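For reference, a minimal sketch of one common way to "track" a blob from frame to frame: reduce each contour to its centroid and match centroids across frames by nearest distance. This uses a plain `Pt` struct instead of `cv::Point` so the arithmetic stands alone; with OpenCV you would normally get the same centroid from `cv::moments`. The names (`Pt`, `centroid`, `dist2`) are illustrative, not from the question's code.

```cpp
#include <vector>

// Minimal stand-in for cv::Point so the sketch compiles without OpenCV.
struct Pt { long x, y; };

// Centroid of a contour: the mean of its points. Tracking a blob across
// frames then reduces to matching each new centroid to the nearest old one.
Pt centroid(const std::vector<Pt>& contour) {
    long sx = 0, sy = 0;
    for (const Pt& p : contour) { sx += p.x; sy += p.y; }
    long n = (long)contour.size();
    return { sx / n, sy / n };
}

// Squared distance, used to find the nearest previous centroid.
long dist2(const Pt& a, const Pt& b) {
    long dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}
```

Inside the `//tracking blob here!` branch you would compute the centroid of `*itc` and match it against the centroids from the previous frame.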

2014-12-09 13:52:07 -0600 marked best answer Error! Debug Assertion Failed! Expression: vector subscript out of range??

I have an issue here. When I try to run this program, I get an error that halts it and says, "Debug Assertion Failed! Expression: vector subscript out of range".

Any idea what I'm doing wrong??

#include"stdafx.h"
#include<opencv2/opencv.hpp>
#include<iostream>
#include<vector>

int main(int argc, char *argv[])
{
    cv::Mat frame;                                              
    cv::Mat bgmodel;                                            
    cv::Mat fg;                                                 
    cv::VideoCapture cap(0);                                    

    cv::BackgroundSubtractorMOG2 bg;                            
    bg.nmixtures = 3;                                           
    bg.bShadowDetection = true;                                 
    bg.nShadowDetection = 0;                                    
    bg.fTau = 0.5;                                              

    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;                           

    cv::namedWindow("Frame");                                   
    cv::namedWindow("Background Model");                    

    for(;;)                                                     
    {
        cap >> frame;                                           

        bg.operator()(frame,fg);                                
        bg.getBackgroundImage(bgmodel);                         

        cv::erode(fg,fg,cv::Mat());                             
        cv::dilate(fg,fg,cv::Mat());                            


        float radius;
        cv::Point2f center;
        cv::minEnclosingCircle(cv::Mat(contours[1]),center,radius);

        cv::findContours(                                       
            fg,                                                 
            contours,                                           
            hierarchy,
            CV_RETR_CCOMP,                                      
            CV_CHAIN_APPROX_SIMPLE);                            

        int idx = 0;
        for(;idx >= 0; idx = hierarchy[idx][0])
        {
                cv::drawContours(
                    frame,                                              
                    contours,                                           
                    idx,                                                
                    cv::Scalar( 0, 0, 255 ),                                                
                    CV_FILLED,                                          
                    8,                                                  
                    hierarchy);                                         
        }

        cv::imshow("Frame",frame);                              
        cv::imshow("Background Model",bgmodel);                 

        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}
2014-12-09 13:52:00 -0600 marked best answer Blob Segmentation from Extracted Foreground

I'm a newbie with OpenCV + C++ + Visual Studio 2012, and now I need to learn them. I just learned how to subtract the background and extract the foreground. Here's the code for background subtraction/foreground extraction.

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

int main(int argc, char *argv[])
{
    cv::Mat frame;
    cv::Mat bgmodel;
    cv::Mat fg;
    cv::VideoCapture cap(0);

cv::BackgroundSubtractorMOG2 bg;
bg.nmixtures = 3;
bg.bShadowDetection = true;

std::vector<std::vector<cv::Point> > contours;

cv::namedWindow("Frame");
cv::namedWindow("Background Model");

for(;;)
{
    cap >> frame;
    bg.operator ()(frame,fg);
    bg.getBackgroundImage(bgmodel);

    cv::erode(fg,fg,cv::Mat());
    cv::dilate(fg,fg,cv::Mat());

    cv::findContours(fg,contours,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_NONE);
    cv::drawContours(frame,contours,-1,cv::Scalar(0,0,255),2);

    cv::imshow("Frame",frame);
    cv::imshow("Background Model",bgmodel);
    if(cv::waitKey(30) >= 0) break;
}
return 0;

}

And now I want to build blobs from the extracted foreground. What should I do with my code? Thanks. :)

2014-12-09 13:16:31 -0600 marked best answer OpenCV Error: Assertion failed BLA BLA BLA in unknown function...

Hi guys, I have a problem:

I want to compute the average color of each BGR (Blue Green Red) channel for a human. First I want to compute the average color over the blob (I'm using the contour method), and then the average color over the rect where the human is. But I get this error:

OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)i1 < (unsigned)size.p[1]) in unknown function, file C:\OpenCV\include\opencv2\core/mat.hpp, line 459

I think the problem is where I compute the average color over the blob:

                        int totalx, totaly, totalz;
                        float avex, avey, avez;
                        totalx = 0;
                        totaly = 0;
                        totalz = 0;

                        for(int cpos = 0; cpos < contours[cnum].size(); cpos++){

                            //printf("[%d, %d]", contours[cnum][cpos].x, contours[cnum][cpos].y);
                            cv::Point3_ <uchar>* p = frame.ptr<cv::Point3_ <uchar> >(contours[cnum][cpos].x, contours[cnum][cpos].y);

                            int a = p -> x;
                            int b = p -> y;
                            int c = p -> z;

                            totalx += a;
                            totaly += b;
                            totalz += c;

                        }

                        avex = (float)totalx / contours[cnum].size();
                        avey = (float)totaly / contours[cnum].size();
                        avez = (float)totalz / contours[cnum].size();

                        std::cout << avex << std::endl;
                        std::cout << avey << std::endl;
                        std::cout << avez << std::endl;

Can you tell me why? I'd appreciate any help here. Thanks! :)

And here is my whole code:

// Shaban.cpp : Defines the entry point for the console application.

#include"stdafx.h"
#include<vector>
#include<iostream>
#include<opencv2/opencv.hpp>
#include<opencv2/core/core.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<opencv2/highgui/highgui.hpp>

int main(int argc, char *argv[])
{
    cv::Mat frame;                                              
    cv::Mat fg;     
    cv::Mat blurred;
    cv::Mat thresholded;
    cv::Mat gray;
    cv::Mat blob;
    cv::Mat bgmodel;                                            
    cv::namedWindow("Frame");   
    cv::namedWindow("Background Model"/*,CV_WINDOW_NORMAL*/);
    //cv::resizeWindow("Background Model",400,300);
    cv::namedWindow("Blob"/*,CV_WINDOW_NORMAL*/);
    //cv::resizeWindow("Blob",400,300);
    //cv::VideoCapture cap(0);  
    cv::VideoCapture cap("campus3.avi");    

    cv::BackgroundSubtractorMOG2 bgs;                           

        bgs.nmixtures = 3;
        bgs.history = 1000;
        bgs.bShadowDetection = true;                            
        bgs.nShadowDetection = 0;                               
        bgs.fTau = 0.5;                                         

    std::vector<std::vector<cv::Point>> contours;               

    cv::CascadeClassifier human;
    assert(human.load("hogcascade_pedestrians.xml"));

    for(;;){

        cap >> frame;                           
        cv::GaussianBlur(frame,blurred,cv::Size(3,3),0,0,cv::BORDER_DEFAULT);

        bgs.operator()(blurred,fg);                         
        bgs.getBackgroundImage(bgmodel);                                

        cv::erode(fg,fg,cv::Mat(),cv::Point(-1,-1),1);                         
        cv::dilate(fg,fg,cv::Mat(),cv::Point(-1,-1),3);       

        cv::threshold(fg,thresholded,70.0f,255,CV_THRESH_BINARY);

        cv::findContours(thresholded,contours,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE);
        cv::cvtColor(thresholded,blob,CV_GRAY2RGB);
        cv::drawContours(blob,contours,-1,cv::Scalar(255,255,255),CV_FILLED,8);

        int cmin = 20; 
        int cmax = 1000;
        bool FOD1 = true;
        bool FOD2 = true;
        std::vector<cv::Rect> rects;

        for(int cnum = 0; cnum < contours.size(); cnum++){

            if(contours[cnum].size() > cmin && contours[cnum].size() < cmax){       

                human.detectMultiScale(frame, rects);   
                //printf("\nThe contour NO = %d size = %d \n", cnum, contours[cnum].size());

                if(rects.size() > 0){
                    for(unsigned int r = 0; r < rects.size(); r++) {

                        int totalx, totaly, totalz;
                        float avex, avey, avez;
                        totalx = 0;
                        totaly = 0 ...
(more)
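One likely cause, for reference: `cv::Mat::ptr(i, j)` indexes `(row, col)`, i.e. `(y, x)`, while the code passes `(x, y)` from the contour point. For a non-square frame that walks past the end of the matrix and trips the assertion. A minimal sketch of the row-first ordering with a flat buffer, no OpenCV, names illustrative:

```cpp
#include <vector>

// A row-major image buffer, like the data behind a single-channel cv::Mat.
// Element (row, col) lives at row * cols + col; the row (y) comes first.
int pixelAt(const std::vector<int>& img, int cols, int row, int col) {
    return img[row * cols + col];
}
```

So the contour access should read `frame.ptr<...>(contours[cnum][cpos].y, contours[cnum][cpos].x)`, with y (row) before x (column).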
2014-12-09 13:08:56 -0600 marked best answer Error when Cropping Human Image after detectMultiScale

Hi guys, I want to crop the detected human image using cv::Rect. This is my code:

int main(int argc, char *argv[])
{
 cv::Mat image = cv::imread("Man.jpg",1);
 cv::CascadeClassifier human;
 assert(human.load("hogcascade_pedestrians.xml"));


 std::vector<cv::Rect> rects;
 human.detectMultiScale(image, rects);   
 cv::Mat imgroi = image(rects);          //Error! Can't Convert Vector to Rect!

 cv::namedWindow("Original");
 cv::namedWindow("Cropped");
 cv::imshow("Original",image);
 cv::imshow("Cropped",imgroi);
 cv::waitKey(0);
 return 0;
}

I'd appreciate any help here. Thanks! :)
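For reference, `detectMultiScale` fills a `std::vector<cv::Rect>`, while `Mat::operator()` takes a single `cv::Rect`, so the crop has to pick one element, e.g. `image(rects[r])` in a loop over the detections. It is also worth clamping each rect to the image bounds before taking the ROI (with OpenCV: `rects[r] & cv::Rect(0, 0, image.cols, image.rows)`). A sketch of that clamping with a plain struct so it stands alone, names illustrative:

```cpp
// Stand-in for cv::Rect.
struct Rect { int x, y, width, height; };

// Clamp a detection rectangle to a cols x rows image so image(rect) is valid.
Rect clampToImage(Rect r, int cols, int rows) {
    if (r.x < 0) { r.width += r.x; r.x = 0; }   // cut off the part left of the image
    if (r.y < 0) { r.height += r.y; r.y = 0; }  // cut off the part above the image
    if (r.x + r.width > cols)  r.width  = cols - r.x;
    if (r.y + r.height > rows) r.height = rows - r.y;
    if (r.width < 0)  r.width = 0;
    if (r.height < 0) r.height = 0;
    return r;
}
```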

2014-11-19 03:00:52 -0600 received badge  Popular Question (source)
2014-10-02 02:57:30 -0600 received badge  Popular Question (source)
2014-10-01 00:41:28 -0600 received badge  Nice Answer (source)
2014-07-20 23:24:58 -0600 received badge  Teacher (source)
2014-02-20 09:06:55 -0600 marked best answer How to detect humans using findContours based on the human shape?

Hi guys, I want to ask how to detect humans or pedestrians in a blob (findContours). I've tried to learn how to detect any object in the frame using findContours(), like this:

#include"stdafx.h"
#include<vector>
#include<iostream>
#include<opencv2/opencv.hpp>
#include<opencv2/core/core.hpp>
#include<opencv2/imgproc/imgproc.hpp>
#include<opencv2/highgui/highgui.hpp>

int main(int argc, char *argv[])
{
    cv::Mat frame;                                              
    cv::Mat fg;     
    cv::Mat blurred;
    cv::Mat thresholded;
    cv::Mat thresholded2;
    cv::Mat result;
    cv::Mat bgmodel;                                            
    cv::namedWindow("Frame");   
    cv::namedWindow("Background Model"
        //,CV_WINDOW_NORMAL
        );
    //cv::resizeWindow("Background Model",400,300);
    cv::namedWindow("Blob"
        //,CV_WINDOW_NORMAL
        );
    //cv::resizeWindow("Blob",400,300);
    cv::VideoCapture cap("campus3.avi");    

    cv::BackgroundSubtractorMOG2 bgs;                           

        bgs.nmixtures = 3;
        bgs.history = 1000;
        bgs.varThresholdGen = 15;
        bgs.bShadowDetection = true;                            
        bgs.nShadowDetection = 0;                               
        bgs.fTau = 0.5;                                         

    std::vector<std::vector<cv::Point>> contours;               

    for(;;)
    {
        cap >> frame;                                           

        cv::GaussianBlur(frame,blurred,cv::Size(3,3),0,0,cv::BORDER_DEFAULT);

        bgs.operator()(blurred,fg);                         
        bgs.getBackgroundImage(bgmodel);                                

        cv::threshold(fg,thresholded,70.0f,255,CV_THRESH_BINARY);
        cv::threshold(fg,thresholded2,70.0f,255,CV_THRESH_BINARY);

        cv::Mat elementCLOSE(5,5,CV_8U,cv::Scalar(1));
        cv::morphologyEx(thresholded,thresholded,cv::MORPH_CLOSE,elementCLOSE);
        cv::morphologyEx(thresholded2,thresholded2,cv::MORPH_CLOSE,elementCLOSE);

        cv::findContours(thresholded,contours,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE);
        cv::cvtColor(thresholded2,result,CV_GRAY2RGB);

        int cmin = 50; 
        int cmax = 1000;

        std::vector<std::vector<cv::Point>>::iterator itc=contours.begin();

        while (itc!=contours.end()) {   

                if (itc->size() > cmin && itc->size() < cmax){ 

                        std::vector<cv::Point> pts = *itc;
                        cv::Mat pointsMatrix = cv::Mat(pts);
                        cv::Scalar color( 0, 255, 0 );

                        cv::Rect r0= cv::boundingRect(pointsMatrix);
                        cv::rectangle(frame,r0,color,2);

                        ++itc;
                    }else{++itc;}
        }

        cv::imshow("Frame",frame);
        cv::imshow("Background Model",bgmodel);
        cv::imshow("Blob",result);
        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}

And now I want to know how to detect humans. Do I need to use HOG, or Haar? If so, how do I use them? Are there any tutorials to learn from? I'm so curious, and learning OpenCV is so much fun! So addictive! :))

Anyway, I'd appreciate any help here, thanks. :)

2014-02-10 16:43:52 -0600 received badge  Self-Learner (source)
2014-02-10 16:43:51 -0600 received badge  Nice Question (source)
2013-12-04 18:35:24 -0600 asked a question How to resize boundingRect to a fixed size?

Hi guys, I want to resize boundingRect to a fixed size with the object centered inside the rect, but googling didn't help. I tried several ways that didn't work, like:

cv::Rect r0 = cv::boundingRect(contours[cnum]);
r0.x = r0.x - 5;
r0.y = r0.y - 5;
r0.height = 60;
r0.width = 100;
cv::rectangle(blob, r0, cv::Scalar(255, 0, 0));
human.detectMultiScale(frame(r0),rects);

Here we go my full code, look at the last line:

    cv::namedWindow("Frame");   
    cv::namedWindow("Background Model");
    cv::namedWindow("Blob");

    cv::VideoCapture cap("skenario c/3.avi");   

    cv::BackgroundSubtractorMOG2 bgs;                       
        bgs.nmixtures = 3;
        bgs.history = 1000;
        bgs.bShadowDetection = true;                            
        bgs.nShadowDetection = 0;                               
        bgs.fTau = 0.25;    

    std::vector<std::vector<cv::Point>> contours;               

    cv::CascadeClassifier human;
    assert(human.load("hogcascade_pedestrians.xml"));
    for(;;){
        cap >> frame;   

        cv::GaussianBlur(frame,blurred,cv::Size(3,3),0,0,cv::BORDER_DEFAULT);

        bgs.operator()(blurred,fg,0);                           
        bgs.getBackgroundImage(bgmodel);    

        cv::erode(fg,fg,cv::Mat(),cv::Point(-1,-1),1);                         
        cv::dilate(fg,fg,cv::Mat(),cv::Point(-1,-1),3); 

        cv::threshold(fg,threshfg,70.0f,255,CV_THRESH_BINARY);

        cv::findContours(threshfg,contours,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE);
        cv::cvtColor(threshfg,blob,CV_GRAY2RGB);
        cv::drawContours(blob,contours,-1,cv::Scalar(255,255,255),CV_FILLED,8);
        blob.copyTo(blobarray[(int)cap.get(CV_CAP_PROP_POS_FRAMES)]);

        int cmin = 20; 
        int cmax = 1000;
        bool FOD1 = true;
        bool FOD2 = true;
        std::vector<cv::Rect> rects;

        for(int cnum = 0; cnum < contours.size(); cnum++){

            if(contours[cnum].size() > cmin && contours[cnum].size() < cmax){       


        //I WANNA RESIZE HERE:
        human.detectMultiScale(frame(cv::boundingRect(contours[cnum])),rects);

I'd appreciate any help here, thanks. :)
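For reference, a fixed-size box usually needs to be re-centered on the blob, not grown from the top-left corner as in the snippet above (shifting x/y by 5 while setting width/height directly pushes the object off-center). A sketch of centering a fixed w x h rect on a bounding box and shifting it back inside the frame, using a plain struct so it stands alone (the 100x60 size mirrors the values tried in the question; names are illustrative):

```cpp
// Stand-in for cv::Rect.
struct Rect { int x, y, width, height; };

// Fixed-size rect centered on bb, shifted back inside a frameW x frameH
// frame when it would stick out past an edge.
Rect centeredFixed(const Rect& bb, int w, int h, int frameW, int frameH) {
    int cx = bb.x + bb.width / 2;   // center of the original bounding box
    int cy = bb.y + bb.height / 2;
    int x = cx - w / 2;
    int y = cy - h / 2;
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x + w > frameW) x = frameW - w;
    if (y + h > frameH) y = frameH - h;
    return { x, y, w, h };
}
```

With OpenCV types the same arithmetic applies to `cv::Rect` before calling `human.detectMultiScale(frame(r0), rects)`.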

2013-12-04 18:19:54 -0600 answered a question How to take "cv::Mat frame" inside contours?

Solved!

for(int cnum = 0; cnum < contours.size(); cnum++){

            if(contours[cnum].size() > cmin && contours[cnum].size() < cmax){       


                human.detectMultiScale(frame(cv::boundingRect(contours[cnum])),rects);

                if(rects.size() > 0){
                            cv::Rect r0 = cv::boundingRect(contours[cnum]);
                            cv::rectangle(frame, 
                                    r0,
                                    cv::Scalar(255, 0, 0));

                            cv::putText(frame,
                                "HUMAN",
                                cv::Point(r0.x + r0.width / 2, r0.y + r0.height / 2),
                                cv::FONT_HERSHEY_SIMPLEX,
                                0.5,
                                cv::Scalar(0,0,255),
                                2,
                                8);

                }
            }
}
2013-12-03 11:04:28 -0600 commented question How to take "cv::Mat frame" inside contours?

cv::rectangle(frame, contours[cnum], ???, cv::Scalar(255, 0, 0)); and now, can you help me figure out what to replace "???" with?

2013-12-03 10:21:30 -0600 commented question How to take "cv::Mat frame" inside contours?

Ok! But I can't draw a rectangle around the detected human in the original frame when using "frame(boundingRect(contours[cnum]))"

2013-12-03 09:10:20 -0600 asked a question How to take "cv::Mat frame" inside contours?

Hi, I want to track humans using detectMultiScale inside contours. How do I do that?

This is my code, look at the last line:

    cv::Mat frame;
    cv::Mat blurred;
    cv::Mat fg;     
    cv::Mat bgmodel;
    cv::Mat threshfg;
    cv::Mat blob;
    cv::Mat blobarray[10000];
    int pixblob = 0;
    int tot_bgr = 0;
    int tot_ex_bgr = 0;
    int green0 = 0;
    int green1 = 0;
    int green2 = 0;
    int green3 = 0;

    cv::namedWindow("Frame");   
    cv::namedWindow("Background Model");
    cv::namedWindow("Blob");

    cv::VideoCapture cap("campus.avi"); 

    cv::BackgroundSubtractorMOG2 bgs;                       
        bgs.nmixtures = 3;
        bgs.history = 500;
        bgs.bShadowDetection = true;                            
        bgs.nShadowDetection = 0;                               
        bgs.fTau = 0.25;                                        

    std::vector<std::vector<cv::Point>> contours;               

    cv::CascadeClassifier human;
    assert(human.load("hogcascade_pedestrians.xml"));
    for(;;){
        cap >> frame;   

        cv::GaussianBlur(frame,blurred,cv::Size(3,3),0,0,cv::BORDER_DEFAULT);

        bgs.operator()(blurred,fg);                         
        bgs.getBackgroundImage(bgmodel);                                

        cv::erode(fg,fg,cv::Mat(),cv::Point(-1,-1),1);                         
        cv::dilate(fg,fg,cv::Mat(),cv::Point(-1,-1),3); 

        cv::threshold(fg,threshfg,70.0f,255,CV_THRESH_BINARY);

        cv::findContours(threshfg,contours,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE);
        cv::cvtColor(threshfg,blob,CV_GRAY2RGB);
        cv::drawContours(blob,contours,-1,cv::Scalar(255,255,255),CV_FILLED,8);
        blob.copyTo(blobarray[(int)cap.get(CV_CAP_PROP_POS_FRAMES)]);

        int cmin = 20; 
        int cmax = 1000;
        bool FOD1 = true;
        bool FOD2 = true;
        std::vector<cv::Rect> rects;

        for(int cnum = 0; cnum < contours.size(); cnum++){

            if(contours[cnum].size() > cmin && contours[cnum].size() < cmax){       

                human.detectMultiScale(???, rects);

What should I replace ??? with? I want to take a Mat from the frame inside the contours. I'd appreciate any help here, thanks. :)

NB: sorry for my english. LOL

2013-12-03 08:48:43 -0600 answered a question How to track a blob per pixel? (blob using findContours)

int main(int argc, char *argv[])
{
cv::Mat frame;
cv::Mat fg;
cv::Mat blurred;
cv::Mat thresholded;
cv::Mat thresholded2;
cv::Mat result;
cv::Mat bgmodel;
cv::namedWindow("Frame");
cv::namedWindow("Background Model");
cv::namedWindow("Blob");
cv::VideoCapture cap("campus3.avi");

cv::BackgroundSubtractorMOG2 bgs;                           

    bgs.nmixtures = 3;
    bgs.history = 1000;
    bgs.varThresholdGen = 15;
    bgs.bShadowDetection = true;                            
    bgs.nShadowDetection = 0;                               
    bgs.fTau = 0.5;                                         

std::vector<std::vector<cv::Point>> contours;               

for(;;)
{
    cap >> frame;                                           

    cv::GaussianBlur(frame,blurred,cv::Size(3,3),0,0,cv::BORDER_DEFAULT);

    bgs.operator()(blurred,fg);                         
    bgs.getBackgroundImage(bgmodel);                                

    cv::threshold(fg,thresholded,70.0f,255,CV_THRESH_BINARY);
    cv::threshold(fg,thresholded2,70.0f,255,CV_THRESH_BINARY);

    cv::Mat elementCLOSE(5,5,CV_8U,cv::Scalar(1));
    cv::morphologyEx(thresholded,thresholded,cv::MORPH_CLOSE,elementCLOSE);
    cv::morphologyEx(thresholded2,thresholded2,cv::MORPH_CLOSE,elementCLOSE);

    cv::findContours(thresholded,contours,CV_RETR_CCOMP,CV_CHAIN_APPROX_SIMPLE);
    cv::cvtColor(thresholded2,result,CV_GRAY2RGB);

    int cmin = 50; 
    int cmax = 1000;

    for(int cnum = 0; cnum < contours.size(); cnum++){

    if(contours[cnum].size() > cmin && contours[cnum].size() < cmax){       

            for(int cpos = 0; cpos < contours[cnum].size(); cpos++){
                              cv::Point3_ <uchar>* p = frame.ptr<cv::Point3_ <uchar> >(contours[cnum][cpos].y, contours[cnum][cpos].x); //tracking blob per pixel here :D
                            }
             }

    }

    cv::imshow("Frame",frame);
    cv::imshow("Background Model",bgmodel);
    cv::imshow("Blob",result);
    if(cv::waitKey(30) >= 0) break;
}
return 0;

}

2013-11-14 06:04:41 -0600 marked best answer How to remove detected human after DetectMultiScale?

Hi guys, I'm now working on my false-human-detection project. Can you tell me how to remove a detected human (rect) after detectMultiScale, so I can skip false human detections in the next frame's detection?

For Example:

std::vector<cv::Rect> rects;
CascadeClassifier.detectMultiScale(frame, rects);

if(rects.size() > 0){
 for(unsigned int r = 0; r < rects.size(); r++){
   //DELETE DETECTED HUMAN (RECT) HERE!
 }
}

I'll appreciate any help here, thanks. :)
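One common approach, for reference: instead of deleting inside the read loop, remove the rects flagged as false detections with the erase-remove idiom, then use only the survivors for the next frame. A sketch with a plain struct; the `minArea` filter here is just an illustrative stand-in for whatever test flags a false positive:

```cpp
#include <vector>
#include <algorithm>

// Stand-in for cv::Rect.
struct Rect { int x, y, width, height; };

// Erase every rect the area test flags as a false detection.
void pruneFalseDetections(std::vector<Rect>& rects, int minArea) {
    rects.erase(std::remove_if(rects.begin(), rects.end(),
                               [minArea](const Rect& r) {
                                   return r.width * r.height < minArea;  // illustrative criterion
                               }),
                rects.end());
}
```

The same idiom works directly on `std::vector<cv::Rect>`, so after `detectMultiScale` fills `rects` you can prune it before drawing or tracking.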