LasOz's profile - activity

2019-06-17 03:20:28 -0600 received badge  Notable Question (source)
2018-12-03 23:11:16 -0600 received badge  Popular Question (source)
2016-08-11 09:53:20 -0600 commented answer Getting the correct frame rate

Hello, it makes the video run at roughly 3 times the video frame rate (64 FPS). I have tried changing a few things in your code but it had no effect on the result.

2016-08-10 15:39:22 -0600 asked a question Getting the correct frame rate

I am using OpenCV 3.1 on VS2015.

I have a video that, according to the file properties, runs at 26 FPS.

I am trying to set the waitKey in such a way that it will allow the video to play at the correct frame rate.

Here is a snippet of my code:

clock_t begin = clock(), end = clock();
unsigned int count = 0;
float FPS = 0;
int wait = (int)(1000.0/cap.get(CV_CAP_PROP_FPS));
std::cout << "Video show a frame every " << wait << " milliseconds (" << cap.get(CV_CAP_PROP_FPS) << " FPS)" << std::endl;

for (unsigned int i = 0; i < (frames-1); i++)
{
    if ((end - begin) / CLOCKS_PER_SEC >= 1)
    {
        FPS = (float)count;
        count = 0;
        begin = clock();
    }

    cv::putText(window_frame, std::to_string(FPS), cv::Point(10, window_frame.rows - 40), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar::all(255), 2, 8);
    cv::imshow("Video", window_frame.clone());

    //Need to get frame rate correct, involving the waitkey
    if (cv::waitKey(wait) >= 0) break;
    end = clock();
    count++;
}

When the code is run the output is as follows:

Video shows a frame every 37 milliseconds (26.9043 FPS)

However, the variable FPS is reporting back 18 to 22. What is the reason for this? I would very much like the video frame rate and the program frame rate to match. I understand waitKey waits for a MINIMUM of the delay supplied to it, so it may not be the best "frame-rate setting" device. Should I try to limit or force the frame rate I want through other means?
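
(One common way to handle this, sketched below under the assumption that cap is an opened cv::VideoCapture, is to pace each frame against a deadline derived from the file's FPS, so the time spent reading and displaying is not added on top of the waitKey delay. The function name and the 1 ms floor are illustrative choices, not from the original code.)

    #include <opencv2/opencv.hpp>
    #include <chrono>
    #include <algorithm>

    // Sketch: play back at the container's reported FPS by waiting only for the
    // part of each frame's time budget that reading/displaying has not used up.
    void play_at_source_fps(cv::VideoCapture &cap)
    {
        const double fps = cap.get(cv::CAP_PROP_FPS);
        if (fps <= 0.0) return; // property not reported by this container/backend

        const auto frame_budget = std::chrono::microseconds((long long)(1e6 / fps));

        cv::Mat frame;
        auto deadline = std::chrono::steady_clock::now();
        while (cap.read(frame))
        {
            deadline += frame_budget;
            cv::imshow("Video", frame);

            // waitKey only guarantees a MINIMUM delay, so wait for whatever is
            // left of this frame's budget (at least 1 ms to keep the GUI alive).
            auto left = std::chrono::duration_cast<std::chrono::milliseconds>(
                deadline - std::chrono::steady_clock::now()).count();
            if (cv::waitKey(std::max(1, (int)left)) >= 0) break;
        }
    }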

2016-08-09 01:36:36 -0600 commented question Webcams work on linux but not windows

@berak cap = cv::VideoCapture(0); and I'm using cap.read() to get the frame.

2016-08-08 13:16:46 -0600 asked a question Webcams work on linux but not windows

I have a program that analyses live video from a webcam.

None of the webcams I have tried works on Windows, but the same webcams work on Linux with no problem.

The issues I have on Windows range from blank (completely black) video feeds to output at such a low resolution and poor frame rate that it is unusable; even when I force a higher resolution and frame rate through OpenCV calls, the feed does not change.

Another irritating factor is that the webcams seem to work fine through other programs such as Skype on Windows, so is it safe to rule out driver issues?

What are possible reasons for this and how can I fix it?
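
(For reference, below is a minimal sketch of the kind of property forcing mentioned above, plus reading back what the driver actually accepted, since set() can be silently ignored. The resolution/FPS values and the DirectShow backend choice are assumptions for illustration, not from the original post; with OpenCV 3.x the backend is selected by adding the API constant to the device index.)

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        // Forcing a specific capture backend is worth trying on Windows;
        // CAP_DSHOW (DirectShow) is just one option, added to the device index.
        cv::VideoCapture cap(cv::CAP_DSHOW + 0);
        if (!cap.isOpened())
        {
            std::cerr << "Failed to open the webcam" << std::endl;
            return 1;
        }

        // Request a higher resolution and frame rate (placeholder values)...
        cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
        cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
        cap.set(cv::CAP_PROP_FPS, 30);

        // ...then read back what the driver actually accepted.
        std::cout << cap.get(cv::CAP_PROP_FRAME_WIDTH) << "x"
                  << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << " @ "
                  << cap.get(cv::CAP_PROP_FPS) << " FPS" << std::endl;

        cv::Mat frame;
        while (cap.read(frame) && !frame.empty())
        {
            cv::imshow("Webcam", frame);
            if (cv::waitKey(1) >= 0) break;
        }
        return 0;
    }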

2016-07-28 18:12:12 -0600 commented answer Getting better background subtraction

Damn, I can't believe I didn't try a learning rate of 0. I will give this a go.

2016-07-28 04:47:04 -0600 received badge  Student (source)
2016-07-27 10:31:47 -0600 asked a question Getting better background subtraction

I am trying to generate a mask of hands typing at a keyboard by analysing every frame of an incoming video. Here is an example of the setup (with some debug info on top):

[image: keyboard setup with debug overlay]

The mask I currently get looks like this, after one round of erosion to suppress some of the difference noise. (Note that the two images are not from the same frame.)

[image: resulting foreground mask]

My subtraction technique is very simple: I have an image of the keyboard with no hands in front of it, use it as the background template, and compute the difference with every new frame.

As you can see it is not great, and I would love any advice on getting this to be a near-perfect mask of the entire hand. I have tried many different techniques, such as:

  • Switching colour space to HSV (works better than BGR)
  • Blurring the images to be subtracted
  • Using the MOG2 background subtractor
  • MorphologyEx to dilate, open, and close the mask

And they all had their own problems.

Blurring caused the "noise" to clump together into bigger masses of difference (yes, I did blur both input images).

MOG2 was really good except for one problem which I could not figure out: I would like to tell the built-in background subtractor to never update its background model, because the way it subtracts the background and generates a mask was very impressive when I turned the learning rate down. So if anyone knows how I can tell MOG2 "use this image as the background model", that would be helpful.
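
(One way this is often handled, sketched below rather than taken from the original post, is to use the third learningRate argument of BackgroundSubtractor::apply(): prime the model on the clean background frame with a learning rate of 1, then pass 0 on every later frame so the model is never updated again. The function name and the MOG2 constructor values are illustrative placeholders.)

    #include <opencv2/opencv.hpp>

    // Sketch: build the MOG2 model once from a clean background frame, then freeze it.
    cv::Mat maskFromStaticBackground(const cv::Mat &background, const cv::Mat &frame)
    {
        // History / variance threshold / shadow settings are placeholder values
        static cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
            cv::createBackgroundSubtractorMOG2(500, 16.0, false);
        static bool primed = false;

        cv::Mat mask;
        if (!primed)
        {
            mog2->apply(background, mask, 1.0); // learning rate 1: model comes entirely from this frame
            primed = true;
        }
        mog2->apply(frame, mask, 0.0);          // learning rate 0: never update the model again
        return mask;
    }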

MorphologyEx had a nasty habit of greatly enhancing noise no matter what I did.

Here is a small snippet of how I am calculating the difference:

    // Per-channel thresholds applied to the (HSV) difference image
    float hue_thresh = 20.0f, sat_thresh = 0.f, val_thresh = 20.f;

    for (int j = 0; j < diffImage.rows; ++j)
    {
        for (int i = 0; i < diffImage.cols; ++i)
        {
            // Mark a pixel as foreground when every channel difference reaches its threshold
            cv::Vec3b pix = diffImage.at<cv::Vec3b>(j, i);
            if (hue_thresh <= pix[0] && sat_thresh <= pix[1] && val_thresh <= pix[2])
            {
                foregroundMask.at<unsigned char>(j, i) = 255;
            }
        }
    }

I understand it is not elegant.

I am hoping someone can suggest an approach that near-perfectly subtracts the hands from the background. It is greatly frustrating because, to me as a human being, the hands are white and the keyboard and surface are dark colours, and yet the pixel-value differences are so small; but I suppose this is the true problem with computer vision.

2016-07-22 18:16:48 -0600 received badge  Scholar (source)
2016-07-22 18:16:47 -0600 received badge  Supporter (source)
2016-07-22 13:14:54 -0600 asked a question Getting background subtraction to work

I am trying to use OpenCV's background subtraction in MSVS 2015. The results I get are very unimpressive compared to what I see online in examples and videos, and I am wondering what I am doing wrong.

This YouTube video I made demonstrates my problem.

It is hard to explain my problem but essentially I am getting a close to blank mask, and the background subtractor seems to focus on the tiniest of differences rather than massive blobs moving across the scene.

Here is the snippet of code I am using (ignore some unused variables):

void find_hand_bs(hand_detection_type type, std::vector<cv::Point> &digits, cv::Mat &input)
{
    cv::Mat frame = input.clone(); //current frame
    cv::Mat fgMaskMOG2; //fg mask generated by the MOG2 method
    cv::Ptr<cv::BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
    pMOG2 = cv::createBackgroundSubtractorMOG2(); //MOG2 approach
    pMOG2->apply(frame, fgMaskMOG2);
    imshow("FG Mask MOG 2", fgMaskMOG2);
}

If there is anyone out there with good OpenCV background-subtraction knowledge, what is your take on my problem, and which settings are going to improve my subtraction? In addition, how do I correctly change those settings? I have tried to adjust the thresholds and got close to no change in the output, so I feel I may not be doing it correctly.
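
(Sketched below, and not from the original post, is one way the MOG2 settings are typically adjusted: create the subtractor once, so its model can accumulate across frames, and use the BackgroundSubtractorMOG2 setters. The specific values and function names here are placeholders.)

    #include <opencv2/opencv.hpp>

    // Created once, outside the per-frame function, so the background model accumulates over time
    cv::Ptr<cv::BackgroundSubtractorMOG2> pMOG2 = cv::createBackgroundSubtractorMOG2();

    void configure_subtractor()
    {
        pMOG2->setHistory(500);          // number of recent frames that shape the model (placeholder value)
        pMOG2->setVarThreshold(16.0);    // squared Mahalanobis distance threshold for foreground (placeholder value)
        pMOG2->setDetectShadows(false);  // drop the grey shadow labelling from the mask
    }

    void update_mask(const cv::Mat &frame, cv::Mat &fgMaskMOG2)
    {
        pMOG2->apply(frame, fgMaskMOG2);
        cv::imshow("FG Mask MOG 2", fgMaskMOG2);
    }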

2016-07-12 09:39:59 -0600 received badge  Enthusiast