
How to record the time of stay by detected people in a video?

asked 2015-12-24 10:51:34 -0600 by muha.ko

updated 2015-12-30 06:31:39 -0600

Hello everybody, imagine a very simple scenario: two people appear in a video without any intersection/overlap. I want to record the duration of each person's stay in the video separately. Would you recommend doing this with background subtraction? How could I solve this problem? I really need some hints and advice. Would anybody please help me?

Thank you very much!


3 answers


answered 2015-12-25 18:22:13 -0600 by harsha

updated 2015-12-30 10:55:09 -0600

You can detect people in a frame using HOG features. To compute how long a person was present in a video, we need the first and last frames of their presence and the video's frame rate:

timeOfStay = (lastFrame - firstFrame) / frameRate

If you would like to record the time of stay for each person, detection alone is not enough; we also need to be able to track them. Since there is no intersection of trajectories, a simple Kalman filter would suffice. Here is a good example: http://www.morethantechnical.com/2011...

You get the first frame when you initialize a Kalman filter for a new detection and the last frame when you lose track of the person of interest. OpenCV provides both the frame number and the frame rate, so hopefully this helps with your problem.
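
For reference, here is a minimal sketch of how to read the frame rate and current frame number from a VideoCapture and turn them into a time of stay; the file name and the firstFrame/lastFrame values are placeholders that your detector/tracker would fill in:

#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("768x576.avi");          // placeholder input video
    if (!cap.isOpened())
        return -1;

    double fps = cap.get(cv::CAP_PROP_FPS);       // frame rate of the video

    // While processing, the index of the current frame is available as:
    //   double frameNo = cap.get(cv::CAP_PROP_POS_FRAMES);
    // firstFrame / lastFrame below are placeholders your tracker would set.
    double firstFrame = 100, lastFrame = 400;

    double timeOfStay = (lastFrame - firstFrame) / fps;   // in seconds
    std::cout << "time of stay: " << timeOfStay << " s" << std::endl;
    return 0;
}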

Edit: including information on Kalman filters and object tracking.

HOG provides detections in a given frame. A video has multiple frames, so we end up with separate detections in each of them. If we expect to see only one person, then the mere occurrence of a HOG detection is enough to identify the frames in which that person was present. In this case, however, we have multiple entities, so we need to associate the detections in each frame with the detections in the previous frame and account for entities entering or leaving the scene.
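
To make the tracking part more concrete, here is a minimal sketch of a constant-velocity Kalman filter for a single track; the createTrack helper and the noise covariances are illustrative assumptions, not tuned values:

#include <opencv2/video/tracking.hpp>   // cv::KalmanFilter

// One track: state = [x, y, vx, vy], measurement = [x, y]
// (e.g. the centre of a HOG detection box).
cv::KalmanFilter createTrack( const cv::Point2f& firstDetection )
{
    cv::KalmanFilter kf( 4, 2, 0, CV_32F );
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);                                       // constant-velocity model
    cv::setIdentity( kf.measurementMatrix );
    cv::setIdentity( kf.processNoiseCov, cv::Scalar::all(1e-2) );
    cv::setIdentity( kf.measurementNoiseCov, cv::Scalar::all(1e-1) );
    cv::setIdentity( kf.errorCovPost, cv::Scalar::all(1) );
    kf.statePost = (cv::Mat_<float>(4, 1) << firstDetection.x, firstDetection.y, 0, 0);
    return kf;
}

// Per frame, for each track: predict, associate the nearest detection, correct:
//   cv::Mat pred = kf.predict();
//   kf.correct( (cv::Mat_<float>(2, 1) << det.x, det.y) );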

Here are some good tutorials on Kalman filters and object tracking: http://www.mathworks.com/help/vision/... https://www.youtube.com/watch?v=FkCT_... https://www.youtube.com/watch?v=NT7nY... https://www.youtube.com/watch?v=rUgKn...

There is some C++ and OpenCV code here: https://github.com/Smorodov/Multitarg...


Comments

Hello Harsha, thank you very much for your quick response. Now I understand how to calculate the "timeOfStay" and how to handle the frames. But what I don't understand is: does the Kalman filter by itself already track persons? I thought the Kalman filter was just an additional step to remove noise, right? Do you maybe have some initial code which I could use as a basic starting point?

I am looking forward to your further hints. Thanks in advance!

muha.ko ( 2015-12-27 07:05:57 -0600 )

I've updated the answer with links to some tutorials on Kalman filters and object tracking. Hope it helps.

harsha ( 2015-12-30 10:56:23 -0600 )

If the scenario isn't too complex, I think a Kalman filter is a bit of an overkill. Simple matching based on bounding-box overlap across frames will suffice to keep track of the detections.

Pedro Batista ( 2016-01-15 04:42:09 -0600 )

@Pedro Batista: Would you please give us more details on what you mean by "using overlap of bounding box over frames"?

muha.ko ( 2016-01-17 03:20:08 -0600 )

Well, if you need to keep track of a detection with an ID, you can compare sequential detections (over different frames). If the bounding boxes of two sequential detections intersect, it probably means they belong to the same object, so you can simply update the object's position.
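
A rough sketch of that idea, assuming you keep a list of tracked objects with IDs (the Track struct and updateTracks helper are purely illustrative):

#include <opencv2/core.hpp>
#include <vector>

struct Track { int id; cv::Rect box; };     // illustrative per-object record

// Match the detections of the current frame to existing tracks by bounding-box overlap.
void updateTracks( std::vector<Track>& tracks, const std::vector<cv::Rect>& detections, int& nextId )
{
    for (const cv::Rect& det : detections)
    {
        bool matched = false;
        for (Track& t : tracks)
        {
            if ((t.box & det).area() > 0)    // boxes intersect -> probably the same object
            {
                t.box = det;                 // update its position
                matched = true;
                break;
            }
        }
        if (!matched)
            tracks.push_back({ nextId++, det });   // a new object entered the scene
    }
}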

Pedro Batista ( 2016-01-18 04:37:41 -0600 )

answered 2015-12-30 15:36:46 -0600 by sturkmen

updated 2016-01-14 10:55:04 -0600

I tried to implement some sample code just to serve as a starting point for a solution to your question.

Could you try it and tell me your remarks? Maybe I will improve it, or some new ideas will come up.

I tested it with 768x576.avi, which can be found in OpenCV's \samples\data.

(Tracking is not implemented yet; my main idea was speeding up pedestrian detection by running the HOG detector only on regions suggested by background subtraction.)


For testing with OpenCV 2.4.x, see the GitHub link.

#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/video/background_segm.hpp>

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    char* filename = argc >= 2 ? argv[1] : (char*)"768x576.avi";
    VideoCapture capture( filename );

    //HOGDescriptor hog;
    //hog.setSVMDetector(hog.getDefaultPeopleDetector());

    HOGDescriptor hog( Size( 48, 96 ), Size( 16, 16 ), Size( 8, 8 ), Size( 8, 8 ), 9, 1, -1,
                       HOGDescriptor::L2Hys, 0.2, false, cv::HOGDescriptor::DEFAULT_NLEVELS);
    hog.setSVMDetector( HOGDescriptor::getDaimlerPeopleDetector() );

    Ptr<BackgroundSubtractor> bgS = createBackgroundSubtractorMOG2();
    Mat frame,output;

    while(true)
    {
        capture.read(frame);
        if (frame.empty())   // end of the video
            break;
        if( frame.cols > 800 )
            resize( frame, frame, Size(), 0.5, 0.5 );

        bgS->apply(frame, output);    // update the background model and get the foreground mask
        erode(output,output,Mat());   // remove small noise blobs from the mask

        // Find contours
        vector<vector<Point> > contours;
        findContours( output, contours, RETR_LIST, CHAIN_APPROX_SIMPLE );

        for ( size_t i = 0; i < contours.size(); i++)
        {
            Rect r = boundingRect( contours[i] );
            if( r.height > 80 && r.width < r.height )   // keep tall, person-like blobs
            {
                r.x -= r.width / 2;      // grow the ROI around the blob
                r.y -= r.height / 2;     // so the HOG window has some margin
                r.width += r.width;
                r.height += r.height;
                r = r & Rect( 0, 0, frame.cols, frame.rows );   // clip to the frame

                Mat roi;
                cvtColor( frame( r ), roi, COLOR_BGR2GRAY);

                std::vector<Rect> rects;

                if( roi.cols > hog.winSize.width && roi.rows > hog.winSize.height )
                    hog.detectMultiScale( roi, rects);

                for (size_t j = 0; j < rects.size(); j++)
                {
                    rects[j].x += r.x;   // map the detection back to full-frame coordinates
                    rects[j].y += r.y;

                    rectangle( frame, rects[j], Scalar( 0, 0, 255 ), 2 );
                }
            }
        }

        imshow("display", frame);
        if(waitKey(30)==27)
        {
            break;
        }
    }
    return 0;
}

Comments

Hi! Excuse me, but when I use the code above it does not run; I get these errors: 1) error C3861: 'createBackgroundSubtractorMOG2': identifier not found; 2) error C2039: 'apply': is not a member of 'cv::BackgroundSubtractor'. How can I fix this? Thank you so much for the help!

baro ( 2016-01-14 03:17:16 -0600 )

What is your OpenCV version? The code runs well with OpenCV 3.x.

sturkmen ( 2016-01-14 06:12:54 -0600 )

@sturkmen I have OpenCV version 2.4.10. How can I change the code without changing my OpenCV version? Thank you!

baro ( 2016-01-14 06:54:10 -0600 )

See the updated answer; I added a GitHub link. I tested it with 2.4.12.

sturkmen ( 2016-01-14 10:56:46 -0600 )

answered 2020-01-08 09:20:27 -0600 by u31226

To achieve this more efficiently, you need some kind of tracking enabled in your code. Once you have a tracker, it will assign an ID to each detected person, and then you can start a timer and keep counting against that person's ID, as sketched below. This way you also don't need to run your detector on every frame, which improves the FPS.
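
A minimal sketch of that bookkeeping, assuming the tracker hands you an integer ID per visible person in each frame (the names and structure are illustrative only):

#include <opencv2/videoio.hpp>
#include <map>
#include <iostream>

int main()
{
    cv::VideoCapture cap("video.avi");            // hypothetical input
    double fps = cap.get(cv::CAP_PROP_FPS);

    std::map<int, double> firstSeen, lastSeen;    // track id -> frame number

    // Inside your processing loop, for every id visible in the current frame:
    //   double frameNo = cap.get(cv::CAP_PROP_POS_FRAMES);
    //   if (firstSeen.count(id) == 0) firstSeen[id] = frameNo;
    //   lastSeen[id] = frameNo;

    // After the video ends, report the time of stay per person:
    for (const auto& kv : firstSeen)
        std::cout << "person " << kv.first << " stayed "
                  << (lastSeen[kv.first] - kv.second) / fps << " s" << std::endl;
    return 0;
}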

Here, have a look at this; although it's in C++, you might get the basic idea: https://www.youtube.com/watch?v=VZjay...

