
changelog's profile - activity

2020-11-15 03:40:56 -0600 received badge  Notable Question (source)
2020-11-02 01:06:17 -0600 received badge  Popular Question (source)
2018-01-11 05:28:12 -0600 received badge  Notable Question (source)
2017-12-19 12:08:42 -0600 received badge  Popular Question (source)
2016-11-05 19:06:05 -0600 received badge  Popular Question (source)
2014-04-09 05:39:10 -0600 received badge  Student (source)
2013-10-03 02:54:48 -0600 commented answer Best way not to have to copy a Mat back into an OutputArray

I'm doing it this way because I need it to behave the same way as any other instance of BackgroundSubtractor.

2013-10-02 10:10:18 -0600 asked a question Best way not to have to copy a Mat back into an OutputArray

I have this code that basically does a "dumb" background subtraction on two frames.

void FrameDifferenceBGS::operator()(cv::InputArray _image, cv::OutputArray _fgmask, double learningRate)
{
  cv::Mat img_input = _image.getMat();

  _fgmask.create(img_input.size(), CV_8U);
  cv::Mat img_foreground = _fgmask.getMat();

  if(firstTime)
    img_input_prev = img_input.clone();

  cv::absdiff(img_input_prev, img_input, img_foreground);

  if(img_foreground.channels() == 3)
    cv::cvtColor(img_foreground, img_foreground, CV_BGR2GRAY);

  cv::threshold(img_foreground, img_foreground, threshold, 255, cv::THRESH_BINARY);

  cv::imshow("Frame Difference", img_foreground);

  img_input.copyTo(img_input_prev);
  firstTime = false;
}

If I don't add img_foreground.copyTo(_fgmask) at the end, the output array isn't updated with the result in img_foreground, resulting in a black image when this is called.

What am I doing wrong, and what should I be doing here?

2013-09-20 03:17:21 -0600 commented question Background removal with changing light

Thank you, but it still suffers from the "Jurassic Park" problem. After a little while, the moving parts become a part of the background :-(

2013-09-19 05:20:49 -0600 received badge  Scholar (source)
2013-09-18 08:41:14 -0600 asked a question Background removal with changing light

I've got a project where I have a camera mounted on the ceiling pointing to a map on a table.

Before any pieces are placed on the map, I train a background subtractor to know what's background and what's not, and segment those pieces, so that I can get their shape & colors.

However, whenever the lighting changes (e.g. you're playing inside next to a window and it starts getting darker), the whole thing falls apart.

This is the only way I've found to correctly identify moving pieces on the game board, but lighting changes basically throw it to the wind.

Any ideas of what I can do to periodically adapt to the lighting conditions without having my pieces becoming part of the background? Any help would be appreciated.

2013-07-26 05:14:27 -0600 asked a question Background subtraction from a still image

I'm working on an application that uses a ceiling-mounted indoor camera. The purpose is to keep track of objects on a surface.

I need to remove the background so that I can get the contours of the "diff" that's there, but using BackgroundSubtractorMOG has been frustrating, since it seems intended only for video.

What I need is to provide a single image that will be the background, and then calculate on each frame from a stream what has changed.

Here's what I have:

#include <libfreenect/libfreenect_sync.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

const char *kBackgroundWindow = "Background";
const char *kForegroundWindow = "Foreground";
const char *kDiffWindow = "Diff";

const cv::Size kCameraSize(cv::Size(640, 480));

int main(int argc, char **argv) {
  uint8_t *raw_frame = (uint8_t *)malloc(640 * 480 * 3);
  uint32_t timestamp;

  // First, we show the background window. A key press will set the background
  // and move on to object detection.
  cv::Mat background(kCameraSize, CV_8UC3, cv::Scalar(0));
  for(;;) {
    freenect_sync_get_video((void **)&raw_frame, &timestamp, 0, FREENECT_VIDEO_RGB);
    background.data = raw_frame;
    cv::cvtColor(background, background, CV_BGR2RGB);

    cv::imshow(kBackgroundWindow, background);
    if(cv::waitKey(10) > 0)
      break;
  }

  // Create two windows, one to show the current feed and one to show the difference
  // between the background and the feed.
  cv::namedWindow(kForegroundWindow);
  cv::namedWindow(kDiffWindow);

  // Canny threshold values for the track bars
  int cannyThresh1 = 20;
  int cannyThresh2 = 50;
  cv::createTrackbar("Canny Thresh 1", kDiffWindow, &cannyThresh1, 5000, NULL);
  cv::createTrackbar("Canny Thresh 2", kDiffWindow, &cannyThresh2, 5000, NULL);

  // Start capturing frames.
  cv::Mat foreground(kCameraSize, CV_8UC3, cv::Scalar(0));
  cv::Mat diff(kCameraSize, CV_8UC3, cv::Scalar(0));

  cv::BackgroundSubtractorMOG2 bg_subtractor(101, 100.0, false);
  bg_subtractor(background, diff, 1);

  for(;;) {
    freenect_sync_get_video((void **)&raw_frame, &timestamp, 0, FREENECT_VIDEO_RGB);
    foreground.data = raw_frame;
    cv::cvtColor(foreground, foreground, CV_BGR2RGB);
    // Calculate the difference between the background
    // and the foreground into diff.
    bg_subtractor(foreground, diff, 0.01);

    // Run the Canny edge detector in the resulting diff
    cv::Canny(diff, diff, cannyThresh1, cannyThresh2);

    cv::imshow(kForegroundWindow, foreground);
    cv::imshow(kDiffWindow, diff);

    if(cv::waitKey(10) > 0)
      break;
  }

  return 0;
}


How can I change this so that it doesn't "learn" about the new background, but just uses the static image stored in background?