
Linus's profile - activity

2020-11-03 10:53:44 -0600 received badge  Famous Question (source)
2020-06-02 04:09:06 -0600 received badge  Popular Question (source)
2018-03-28 21:02:21 -0600 received badge  Notable Question (source)
2017-07-04 14:36:57 -0600 received badge  Popular Question (source)
2015-11-24 02:31:39 -0600 commented question Mask out face from image with OpenCV

@sturkmen Thanks, but it uses the obsolete API and I am on OpenCV 3.0; I will have a look at it, though.

2015-11-24 00:47:57 -0600 commented question Mask out face from image with OpenCV

@sturkmen Thank you, that seems like it could work, although if possible I'd like to extract only the facial part, as demonstrated in this picture, but with OpenCV.

2015-11-23 15:06:43 -0600 asked a question Mask out face from image with OpenCV

Hi, I am trying to use OpenCV to extract only the face from an image, without any of the background (I only want the actual face contour). I tried detecting a face, extracting the ROI and passing it to a skin-detector algorithm, but the latter step fails most of the time.

import cv2
import numpy as np
import sys

cascPath = sys.argv[1]
faceCascade = cv2.CascadeClassifier(cascPath)

# HSV bounds used for skin detection
lower = np.array([0, 48, 80], dtype="uint8")
upper = np.array([20, 255, 255], dtype="uint8")

video_capture = cv2.VideoCapture(0)

def face_detect():
    while True:
        # Capture frame-by-frame
        ret, frame =
        if not ret:
            break

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        faces = faceCascade.detectMultiScale(
            gray,
            scaleFactor=1.1,
            minNeighbors=5,
            minSize=(30, 30),
        )

        # Draw a rectangle around the faces
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            face_region = frame[y:y+h, x:x+w]
            converted = cv2.cvtColor(face_region, cv2.COLOR_BGR2HSV)
            skinMask = cv2.inRange(converted, lower, upper)

            # apply a series of erosions and dilations to the mask
            # using an elliptical kernel
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
            skinMask = cv2.erode(skinMask, kernel, iterations=2)
            skinMask = cv2.dilate(skinMask, kernel, iterations=2)

            # blur the mask to help remove noise, then apply the
            # mask to the frame
            skinMask = cv2.GaussianBlur(skinMask, (3, 3), 0)
            skin = cv2.bitwise_and(face_region, face_region, mask=skinMask)

            cv2.imshow("Face", np.hstack([face_region, skin]))

            # Further processing on only the face without the background.

        # Display the resulting frame
        cv2.imshow('Video', frame)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    video_capture.release()
    cv2.destroyAllWindows()

face_detect()

Is there a better approach to extract the face? Maybe there is a way to select the most common colours, threshold on those, and apply the resulting mask to my image.

So I would like to segment the face along guidelines such as the ones below, but using OpenCV. (image)

The result would be something like this: (image)

Please let me know if you have any ideas.

2015-03-23 07:48:59 -0600 received badge  Student (source)
2014-10-19 06:41:37 -0600 asked a question How to display an image with OpenCV as a Screen Saver on Windows

I am trying to create a screen saver with OpenCV and the scrnsave library on Windows. To start, I have tried to display an image using the cv::imshow function. I load the image with cv::imread, create a window named "image", get that window's handle, and set its parent to the handle provided by the scrnsave library.

However, the code below gives me a gray window with nothing in it; it also shouldn't be decorated with minimize, maximize or close buttons. I want the actual image drawn in the foreground on top of everything, using the handle provided by the Windows library, just like drawing a red circle onto a black background.

I didn't find anything on the web explaining how to implement OpenCV routines into my screen saver.

Here is my abbreviated code for my screen saver:

LRESULT WINAPI ScreenSaverProc(HWND hWnd,
     UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message) {
        case WM_CREATE: {
            myImage = cv::imread("path/to/my/folder/image.png", CV_LOAD_IMAGE_UNCHANGED);
            cv::namedWindow("image");
            cv::imshow("image", myImage);
            HWND hWnd2 = (HWND) cvGetWindowHandle("image");
            ::SetParent(hWnd2, hWnd);
            ::ShowWindow(hWnd, SW_HIDE);
            return 0;
        }
        case WM_DESTROY: {
            return 0;
        }
    }
    return DefScreenSaverProc(hWnd, message, wParam, lParam);
}

Once again, it just gives me an empty gray window; it seems to create a separate window instead of drawing onto the foreground.

2014-08-15 09:51:08 -0600 commented question OpenCV Error [Segmentation fault]: Converting data from OpenGL to OpenCV Matrix.

@berak Okay, I have now placed the code within the IF statement so that cam->EndReadFrame(1); is called afterwards. @boaz001 Thanks, it didn't give me any compile errors, but the segmentation fault remains; maybe something else about my code is wrong? You can take a look here, and if it doesn't make sense you may want to look at the source code of the API, particularly graphics.cpp and camera.cpp.

2014-08-14 18:16:46 -0600 commented question OpenCV Error [Segmentation fault]: Converting data from OpenGL to OpenCV Matrix.

@berak Thank you, I have now changed the third parameter to CV_8UC4; it is in fact ARGB from what I can understand and what I've read here. It doesn't make much difference though. I'm not sure if you want me to simply put TempMat.clone() after the Mat constructor call or something else; it gives me the same error with no other information (i.e. Segmentation fault). Is it correct to use &frame_data to refer to it as a pointer? It gives me a compile error if I don't (see my comment above to @boaz001).

2014-08-14 18:11:01 -0600 commented question OpenCV Error [Segmentation fault]: Converting data from OpenGL to OpenCV Matrix.

@boaz001 Thank you, I did read that but found it a bit hard to understand. The problem is that when I try to use frame_data instead of &frame_data on the cv::Mat constructor line, it gives me the error: invalid conversion from 'const void*' to 'void*' [-fpermissive]. Any ideas?

2014-08-13 16:47:58 -0600 asked a question OpenCV Error [Segmentation fault]: Converting data from OpenGL to OpenCV Matrix.

Hello, I stumbled across this API, which captures images directly and converts the camera feed (raw YUV data) to RGBA on the GPU. This is beneficial because I'm using my RPi, which doesn't have a lot of processing power, so I want a live camera feed that can be shown directly on screen while the CPU continuously processes the data in the background.

The API lacks documentation and it is unclear how to use it with OpenCV, but it is simple and efficient and just what I need, so it would be quite helpful if I could get it working. This is the code I've tried to convert the camera feed to a cv::Mat object, with no success:

    const void* frame_data; int frame_sz;

    // if doing argb conversion the frame data will be exactly the right size, so just set directly
    cam->BeginReadFrame(1, frame_data, frame_sz);

    // convert the frame data to a cv::Mat object
    cv::Mat TempMat = cv::Mat(MAIN_TEXTURE_HEIGHT, MAIN_TEXTURE_WIDTH, CV_8UC1, &frame_data, frame_sz);
    imshow("Camera Feed", TempMat);

    cam->EndReadFrame(1);

Of course, this is just a small segment of the code, but it contains a pointer (cam) to a class which provides these functions:

bool BeginReadFrame(int level, const void* &out_buffer, int& out_buffer_size);
void EndReadFrame(int level);

The BeginReadFrame function basically makes a call to some MMAL routines, reads from the camera output and converts it into RGBA format; the other function releases the data buffer back to the pool it came from.

So the API is pretty much a wrapper around the relatively complex MMAL library to make things a lot easier.

On the other hand, it isn't shown how to convert this frame data to an OpenCV image. I believe it is quite simple, but I'm not very experienced in either OpenCV or OpenGL.

I encourage you to check out the API to get a better idea of what it does, and how it works.

The "Segmentation fault" error is just printed to the console with no other information, and the program then terminates.

Thank you very much in advance!
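As a sanity check on the buffer layout, the same idea can be sketched in Python/NumPy terms (the dimensions here are made up for illustration): a width × height RGBA frame occupies exactly width * height * 4 bytes and should be wrapped as a 4-channel image (CV_8UC4 in OpenCV terms, not CV_8UC1), using the buffer itself rather than a pointer to the pointer.

```python
import numpy as np

# Hypothetical raw RGBA frame as a flat byte buffer, standing in for the
# buffer that BeginReadFrame hands back: width * height * 4 bytes.
width, height = 64, 48
frame_data = bytes(width * height * 4)

# Wrap the buffer without copying: note the 4-channel shape, and that we
# pass the buffer itself (the analogue of frame_data, not &frame_data).
frame = np.frombuffer(frame_data, dtype=np.uint8).reshape(height, width, 4)
```

If the buffer is shorter than height * width * 4 (e.g. wrong channel count), the reshape fails immediately, which is the NumPy analogue of the out-of-bounds reads that show up as a segmentation fault in the C++ version.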

2014-08-08 07:17:11 -0600 received badge  Scholar (source)
2014-08-07 07:15:36 -0600 commented answer Background subtraction using OpenCV MOG from live camera feed.

I edited my question to provide more information about my system; would you mind having a look again? I would appreciate it a lot.

2014-08-07 07:15:34 -0600 received badge  Supporter (source)
2014-08-06 01:59:12 -0600 commented answer Background subtraction using OpenCV MOG from live camera feed.

Still same result :(

2014-08-05 16:49:25 -0600 received badge  Editor (source)
2014-08-05 16:47:33 -0600 asked a question Background subtraction using OpenCV MOG from live camera feed.

Hello, I am studying OpenCV and having a lot of fun, but now I'm stuck on a problem: I'm trying to use a background-subtraction algorithm to detect any changes.

I followed this tutorial and I managed to get it working to detect changes in a video file (AVI).

The problem I'm having right now is that it subtracts the background incorrectly: noise and other small changes get through, and the mask ends up filling pretty much the whole screen with white.

Here is my implementation of the MOG algorithm on a live camera feed, but the relevant part is this:

    VideoCapture cap;
    if (argc > 1)[1]);  // video file passed on the command line
    else;            // otherwise the default camera
    cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('D', 'I', 'V', '3'));
    Mat frame, fgMaskMOG;

    Ptr<BackgroundSubtractor> pMOG = new BackgroundSubtractorMOG();
    for (;;)
    {
        if (! {
            cerr << "Unable to read next frame." << endl;
            break;
        }
        // process the image to obtain a mask image.
        pMOG->operator()(frame, fgMaskMOG);

        std::string time = getDateTime();
        // show image.
        imshow("Image", frame);
        int c = waitKey(30);
        if (c == 'q' || c == 'Q' || (c & 255) == 27)
            break;
    }
This implementation works just fine for a video file, as you can see: (images)
But this is the result when I try to use MOG on a live camera feed: (images)


EXPECTED RESULT: the same as with the video file (see the first two pictures above).
ACTUAL RESULT: far from the expected result; a lot of noise was generated (i.e. not filtered out), and when I put something in front of the camera it appeared black instead of white (the inverse of the video-file result).

- - - - SYSTEM DETAILS - - - -
OS: Windows, 64-bit
WEBCAM: built-in webcam on my Toshiba Satellite C660 laptop
IDE: Microsoft Visual Studio Express 2012 for Windows Desktop, Version 11.0.61030.00 Update 4
.NET: Microsoft .NET Framework, Version 4.5.50948
OpenCV Version: 2.4.9, built for Windows, downloaded from SourceForge
OUTPUT FROM cv::getBuildInformation(): OpenCV_BUILD.txt
Microsoft Visual Studio Project Property Sheet: OpenCV Project Property Sheet