
MartinTarrou's profile - activity

2019-04-09 06:11:28 -0600 received badge  Popular Question (source)
2016-01-24 21:33:19 -0600 asked a question Normalizing Sobel Filter Noise

I'm writing my own Sobel filter and am having trouble getting it to run properly. I'm applying a horizontal and a vertical convolution to a grayscale image; each appears to produce a reasonable result on its own, but not when combined. All I get is a slightly darkened version of the original image.

As you can tell from my code, I've had some trouble working out exactly how to normalize the results into the 0-255 range. I believe 255/1442 is correct for grayscale, but if I change the multiplier I get white patches around corners, and I also get a lot of noise that seems to extend toward the lower-left corner. In the image below, I've only applied the filter to a section of the frame.

Am I using the right multiplier? Is there another problem I'm encountering?

Hopefully I'm not making a pixel-coordinate mistake or something similar; I've stepped through a lot of the code in the debugger, and everything seems to match the algorithm descriptions I've read.

Thanks in advance for any help or advice.

[screenshot: partially filtered frame, with noise spreading toward the lower-left corner]

void sobel_filter(Mat im1, Mat im2, int x_size, int y_size){

    // Horizontal and vertical Sobel kernels.
    int hweight[3][3] = { { -1,  0,  1 },
                          { -2,  0,  2 },
                          { -1,  0,  1 } };
    int vweight[3][3] = { { -1, -2, -1 },
                          {  0,  0,  0 },
                          {  1,  2,  1 } };

    // Apply both kernels at every pixel.
    float gradx;
    float grady;
    for (int x = 0; x < x_size; x++){
        for (int y = 0; y < y_size; y++){
            gradx = 0;
            grady = 0;
            for (int cx = -1; cx < 2; cx++){
                for (int cy = -1; cy < 2; cy++){
                    // Skips neighbors at index 0 or below; indexes past
                    // the right/bottom edges are not rejected here.
                    if (x + cx > 0 && y + cy > 0){
                        gradx = gradx + hweight[cx + 1][cy + 1] * (int) im1.at<uchar>(y + cy, x + cx);
                        grady = grady + vweight[cx + 1][cy + 1] * (int) im1.at<uchar>(y + cy, x + cx);
                    }
                }
            }
            // Use the Pythagorean theorem to combine both directions.
            float pyth;
            pyth = sqrt(pow(gradx, 2) + pow(grady, 2));
            //pyth = pyth / 3;
            pyth = pyth * 255 / 1442;
            //pyth = pyth * 255 / (1442 * 3);
            //pyth = gradx * 255.0 / 1020.0;
            im2.at<uchar>(y, x) = (int) pyth;
        }
    }
}
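For reference on the scaling: with 8-bit input, the largest possible per-axis response is 4 × 255 = 1020, so the combined magnitude peaks at √(1020² + 1020²) ≈ 1442.5. That makes 255/1442 the right normalizer for the combined result (and 255/1020 for a single axis). Separately, the border test above rejects neighbors at index 0 and below, but it lets x + cx and y + cy run one step past the right and bottom edges. A minimal sketch of a full bounds check, as a guess at the intended condition rather than a confirmed fix:

    // Sketch: clamp the 3x3 neighborhood to the image on all four sides.
    int ix = x + cx;
    int iy = y + cy;
    if (ix >= 0 && ix < x_size && iy >= 0 && iy < y_size){
        gradx = gradx + hweight[cx + 1][cy + 1] * (int) im1.at<uchar>(iy, ix);
        grady = grady + vweight[cx + 1][cy + 1] * (int) im1.at<uchar>(iy, ix);
    }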

2016-01-21 22:25:14 -0600 asked a question Memory error accessing grayscale pixel data

Hi all,

Somewhat of a simple question, but I haven't found the answer by searching around. I'm trying to work with direct pixel data and am having trouble figuring out what determines the size of the Mat, so that I can work out which indexes are valid. I set my capture to 640x400, but attempting to access any pixel with an x-coordinate larger than 365 throws an access-violation exception. Resizing the Mat seems to have no effect. Am I missing something simple here? My full code is below:

#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <iostream>
#include <stdio.h>

using namespace std;
using namespace cv;

VideoCapture vid;
int main(int argc, const char** argv)
{
    vid.open(0);
    // Request 640x400; the driver may deliver a different size.
    vid.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    vid.set(CV_CAP_PROP_FRAME_HEIGHT, 400);
    if (!vid.isOpened()) // check if we succeeded
        return -1;

    for (;;)
    {
        Mat frame;
        Mat gray;
        vid >> frame;                       // grab the next frame
        cvtColor(frame, gray, CV_BGR2GRAY); // convert to single channel
        //resize(gray, gray, Size(640, 400), 0, 0, INTER_CUBIC);
        imshow("webcam", gray);
        cout << (int) gray.at<uchar>(639, 399) << endl;
        waitKey(30);
    }
}
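Two things may be relevant here, added for reference: Mat::at takes (row, col), i.e. (y, x), so at<uchar>(639, 399) asks for row 639 of an image that should only have 400 rows; and VideoCapture can silently deliver a size other than the one requested. A minimal sketch that checks both, as an addition rather than part of the original post:

    // Sketch: print the size the camera actually delivered, then index
    // in (row, col) = (y, x) order.
    cout << "rows=" << gray.rows << " cols=" << gray.cols << endl;
    if (gray.rows >= 400 && gray.cols >= 640)
        cout << (int) gray.at<uchar>(399, 639) << endl; // bottom-right pixel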

Thanks in advance for any help.

2015-11-18 19:56:16 -0600 commented question Headtracking while user is wearing HMD

I'm hoping to be able to distribute it without any additional hardware beyond the Morpheus and the PS4 camera, so this unfortunately wouldn't work, although I've considered it.

2015-11-18 02:27:06 -0600 received badge  Editor (source)
2015-11-17 20:14:35 -0600 asked a question Headtracking while user is wearing HMD

I'm working on a VR project where it's necessary to get data about the user's body via a second, external, stationary webcam. Specifically, I'm looking to track chest movements for breathing. Normally an easy way to get the chest position would be to extrapolate from face-tracking data; however, since the user will always be wearing an HMD (e.g. Morpheus, Oculus), that will likely interfere.

Any suggestions for the best method? Would tracking just the mouth be possible? I'm only using a standard webcam for the second camera, so Kinect-style skeletal tracking would be hard.

Thanks for the help in advance.

EDIT: Doing some research, it seems a lot of face-tracking programs use Haar cascades. Would it be possible/realistic to train a new Haar cascade XML on pictures of users wearing HMDs?
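For context, a custom cascade (e.g. one produced with the opencv_traincascade tool) loads exactly like the stock face cascades. A minimal sketch, where the XML file name is hypothetical:

    // Sketch: detect faces with a hypothetical cascade trained on
    // pictures of users wearing HMDs.
    CascadeClassifier hmdFace;
    if (!hmdFace.load("haarcascade_hmd_face.xml")) // hypothetical file
        return -1;
    vector<Rect> faces;
    hmdFace.detectMultiScale(gray, faces, 1.1, 3); // scale step, min neighbors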

2015-11-17 02:24:08 -0600 received badge  Supporter (source)
2015-11-17 02:23:53 -0600 received badge  Scholar (source)
2015-11-14 03:58:45 -0600 received badge  Student (source)
2015-11-12 02:16:56 -0600 received badge  Enthusiast
2015-11-10 20:28:40 -0600 asked a question Best method for breath tracking

Hey All,

I'm making a Windows application that does breath tracking with a standard webcam. It detects when the user is inhaling, exhaling, and resting, and eventually I'd like to get more precise information, such as the speed and depth of each breath (i.e. whether the user is breathing deeply or not). I currently have two prototypes using different methods and was wondering if people had input on which would be better long term.

Both detect general upward and downward movement in order to determine breath. The first is based on the motempl.c example program (which I have not been able to get running in OpenCV 3 yet, but can in earlier versions). Demo here: https://www.youtube.com/watch?v=KtVON... Code here: http://home.engineering.iastate.edu/~...

The other prototype uses Farneback optical flow. Demo here: https://www.youtube.com/watch?v=tg0oj... Example code: http://study.marearts.com/2014/04/ope...

I'm eventually hoping to port the project to UE4 using the OpenCV plugin, if that makes a significant difference.

I'm mostly curious whether one of the algorithms is definitively better for this kind of tracking. The biggest priority is to focus detection on the chest, so that background and other movement doesn't hurt it too much.
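To make the chest-focus idea concrete, here is a minimal sketch of the Farneback route restricted to a region of interest; the rectangle coordinates are placeholders, and prevGray/gray are assumed to be consecutive grayscale frames:

    // Sketch: Farneback optical flow over a chest ROI only. The Rect
    // values are placeholders for a real chest estimate.
    Rect chest(220, 240, 200, 150);
    Mat flow;
    calcOpticalFlowFarneback(prevGray(chest), gray(chest), flow,
                             0.5, 3, 15, 3, 5, 1.2, 0);
    Scalar avg = mean(flow); // avg[1] = mean vertical displacement
    // avg[1] < 0 suggests upward motion (inhale), avg[1] > 0 downward (exhale)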

Thanks in advance for the help.

2015-11-05 03:28:30 -0600 asked a question Running Sample Programs in Visual Studio 2013

Hey All, I have OpenCV working in Visual Studio 2013 and ran a test program with my webcam. I'd like to start working with motion tracking, so I thought I'd play around with the Optical_Flow example contained in the samples folder. However, putting the cpp file into my project led to a whole hunt for why cudaarithm.hpp and cudaoptflow.hpp couldn't be found. I eventually put the headers, and the folders they were originally contained in, into the same include directory as the other headers (i.e. core.hpp), but now I get a lot of LNK2019 errors, which I frankly don't understand. Any advice on how exactly to run the sample code? Should I create a new project? Do I have to do some folder gymnastics?
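One general note: LNK2019 usually means the headers were found but the matching .lib files were never handed to the linker. A minimal sketch of declaring the dependency from source instead of the project settings; the library name is an assumption and must match the actual build (the cuda* modules additionally require a CUDA-enabled build of OpenCV):

    // Sketch (MSVC-specific): request the OpenCV library at link time.
    // "opencv_world300.lib" is an assumed name; substitute the .lib files
    // your OpenCV 3 installation actually ships.
    #pragma comment(lib, "opencv_world300.lib")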

Thanks in advance for the help.