_Robert's profile - activity

2019-01-25 13:37:30 -0600 received badge  Self-Learner (source)
2019-01-25 13:37:30 -0600 received badge  Teacher (source)
2014-09-30 16:31:17 -0600 commented question Is it possible to run filter2D at just one point in an image?

I've just seen a note in the documentation which suggests this will work simply by cropping a 1px ROI: "Note: When the source image is a part (ROI) of a bigger image, the function will try to use the pixels outside of the ROI to form a border. To disable this feature and always do extrapolation, as if src was not a ROI, use borderType | BORDER_ISOLATED."
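
A minimal sketch of that idea (my reading of the note, untested; x and y are the point of interest, source and kernel as in the question below):

// Crop a 1x1 ROI at the point of interest; with the default border type,
// filter2D reads the neighbouring pixels of the parent image for the border.
cv::Mat roi = source(cv::Rect(x, y, 1, 1));
cv::Mat result;
cv::filter2D(roi, result, CV_64F, kernel);
double valueAtPoint = result.at<double>(0, 0);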

2014-09-30 15:29:04 -0600 asked a question Is it possible to run filter2D at just one point in an image?

I am currently running filter2D:

cv::filter2D(source, 
              dest, 
              CV_64F, 
              kernel, 
              cv::Point(-1,-1), 
              0, 
              cv::BORDER_CONSTANT);

However, I am not interested in convolving the whole image with the kernel; I just want to run the convolution at a given point.

Is this possible?

2014-09-18 15:40:39 -0600 commented question filter2D and sepFilter2D, specifying the value for a constant border

The source for filter2D does a copyMakeBorder anyway; perhaps this is cheaper than doing a range check on every iteration, as that would introduce branches (i.e. if inside the image use the pixel value, else use a constant value). https://github.com/Itseez/opencv/blob/5f590ebed084a5002c9013e11c519dcb139d47e9/modules/ts/src/ts_func.cpp#L787
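
In code the pattern is roughly this (a sketch of the idea, not the actual filter2D source; r is the kernel radius):

// Pad the image once up front so every kernel read is in bounds,
// removing the need for a per-pixel branch in the inner loop.
cv::Mat padded;
cv::copyMakeBorder(src, padded, r, r, r, r, cv::BORDER_CONSTANT, 0);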

2014-09-16 14:33:09 -0600 received badge  Scholar (source)
2014-09-15 14:51:56 -0600 answered a question Does using Gabor Energy disregard the sign of the Gabor kernel?

From looking at some sample source code I realised my mistake. https://github.com/juancamilog/gpu_convolve_test/blob/master/gpu_convolve_test.cpp

The convolution step sums the multiplication of corresponding pixels, which takes into account the sign of the Gabor kernel.
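
In code, the corrected understanding is roughly this (a sketch; the kernel size and parameters are arbitrary, and imageFloat is the image converted to floating point):

// filter2D sums the products of kernel and image values at each position,
// so the kernel's sign affects the response before anything is squared.
cv::Mat kernel = cv::getGaborKernel(cv::Size(31, 31), 8.0, 0.0, 16.0, 1.0);
cv::Mat response;
cv::filter2D(imageFloat, response, CV_64F, kernel);
double energy = cv::norm(response, cv::NORM_L2SQR); // sum of squared responses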

2014-09-15 14:17:57 -0600 commented answer how to extract gabor feature using opencv?

What are you supposed to do with dest to get a feature out of it? I thought you had to sum the squares of the pixel values to get the 'Gabor energy'; however, this seems to lose the sign of the Gabor filter. See my question here: http://answers.opencv.org/question/42056/does-using-gabor-energy-disregard-the-sign-of-the/

2014-09-15 14:13:26 -0600 asked a question Does using Gabor Energy disregard the sign of the Gabor kernel?

I am looking into feature extraction using Gabor filters. I believe the steps are:

  1. Generate a set of Gabor kernels (each a matrix of floating point numbers with values between -1 and +1).
  2. Convert the image into a floating point matrix.
  3. Convolve each Gabor kernel with the image, centred at each pixel in turn (i.e. the first output image is the input image multiplied by the pixel value of the corresponding pixel in the Gabor kernel; then centre the Gabor kernel at the next pixel and repeat).
  4. Calculate the 'energy' of the result by summing the squares of each pixel.

I think I must be missing something since if that was the case then the sign of the Gabor kernel would not matter.

For example, if a pixel had an intensity of 0.1 and it was convolved with a Gabor kernel with a corresponding pixel value of 0.5, then the output would be

(0.1 * 0.5)^2 = 0.0025

This would be the same if the Gabor kernel had a value of -0.5:

(0.1 * -0.5)^2 = 0.0025

Therefore it would not matter what sign the Gabor kernels had, and these two kernels would be effectively identical:

[images: the two opposite-sign Gabor kernels]

2014-09-13 16:26:16 -0600 received badge  Nice Question (source)
2014-09-13 06:52:30 -0600 asked a question Why is the default Gabor phase offset 90 degrees?

This is using the default phase offset (CV_PI*0.5): getGaborKernel(size, 8.0, 0.0, 16.0, 1.0);

This is using a zero phase offset: getGaborKernel(size, 8.0, 0.0, 16.0, 1.0, 0.0);

[image: the two kernels compared]

So it seems that the default phase offset (90 deg) removes the symmetry of the Gabor kernel. I have seen some other references that use the same offset, so I guess it's a standard convention.

Why is this the default? Is it generally more useful to have this for feature extraction?

Also asked on stack overflow.
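
For reference, the two calls side by side (a sketch; size as above):

// psi = 0 gives an even (cosine-phase, symmetric) kernel; the default
// psi = CV_PI*0.5 gives an odd (sine-phase, antisymmetric) one.
cv::Mat evenKernel = cv::getGaborKernel(size, 8.0, 0.0, 16.0, 1.0, 0.0);
cv::Mat oddKernel  = cv::getGaborKernel(size, 8.0, 0.0, 16.0, 1.0); // default psi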

2014-03-19 12:29:07 -0600 commented question How do I pass ownership of pixel data to cv::Mat

Thanks :) I was just frustrated since I asked the question then figured it out. I'll post the code once I confirm it works.

2014-03-19 11:30:37 -0600 commented question How do I pass ownership of pixel data to cv::Mat

Ah, I think I am being an idiot... Instead of allocating the buffer myself, doing the external framework operation, and then passing it into the Mat constructor, I can get the Mat constructor to allocate the buffer for me and then pass that to the external framework. That way the buffer will be memory managed by the Mat.
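
Something like this (an untested sketch; the vImage_Buffer fields are from the Accelerate headers, and height/width are the intended image size):

// Let cv::Mat own (and refcount) the allocation, then point the vImage
// buffer at the Mat's data instead of a separately allocated block.
cv::Mat result(height, width, CV_8UC1);
vImage_Buffer buffer;
buffer.data     = result.data;
buffer.height   = (vImagePixelCount)result.rows;
buffer.width    = (vImagePixelCount)result.cols;
buffer.rowBytes = result.step;
// ... run the Accelerate operation into buffer; the Mat frees the memory later.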

2014-03-19 11:19:23 -0600 commented question How do I pass ownership of pixel data to cv::Mat

Good suggestion - that would do the job. However, the reason I have external data is that I want to see if Apple's Accelerate framework speeds up an affine transform. So ideally, I would like to avoid the copy.

2014-03-19 10:59:24 -0600 asked a question How do I pass ownership of pixel data to cv::Mat

I am creating a cv::Mat passing in pixel data that I have allocated externally.

cv::Mat transformedResult(vImageResult.height,
                          vImageResult.width,
                          CV_8UC1,
                          vImageResult.data);

I would like the cv::Mat to take ownership of the bytes (i.e. create a refcount and free the bytes when it reaches zero). However, the documentation says:

Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.

  • If I free the underlying vImageResult.data immediately, then I will get a bad access crash somewhere down the line.
  • If I don't free the underlying vImageResult.data, then the data will leak.

Is there a way to pass ownership?
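
For completeness, the copy-based fallback suggested in the comments would look like this (a sketch, assuming the buffer was allocated with malloc):

// Wrap the external buffer without taking ownership, then clone so the
// Mat owns a refcounted deep copy of the pixels.
cv::Mat wrapped(vImageResult.height, vImageResult.width, CV_8UC1, vImageResult.data);
cv::Mat owned = wrapped.clone();
free(vImageResult.data); // safe now; owned manages its own memory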

2014-01-15 14:37:55 -0600 asked a question What are the high level steps to classify a facial expression?

I want to tell if a face in an image is smiling or winking or shouting etc...

I have made a program that works quite well that does the following (sketched in code after the list):

  1. Find the face rectangle using the usual methods (e.g. haarcascade_frontalface_alt.xml)
  2. Crop the face and account for rotations
  3. Resize to a fixed size.
  4. Filter the image (e.g. blur & equalise the histograms etc..)
  5. Run a series of custom HAAR classifiers once on the resultant image (e.g. smile classifier, wink classifier... etc)
  6. Choose the expression with the highest score.
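
Roughly, in code (a sketch; image is a grey input frame, and the expression cascade file names are hypothetical):

// Steps 1-6: detect the face, normalise it, then score each expression cascade.
cv::CascadeClassifier faceCascade, smileCascade, winkCascade;
faceCascade.load("haarcascade_frontalface_alt.xml");
smileCascade.load("smile.xml"); // hypothetical custom classifier
winkCascade.load("wink.xml");   // hypothetical custom classifier

std::vector<cv::Rect> faces;
faceCascade.detectMultiScale(image, faces);
if (!faces.empty())
{
    cv::Mat face;
    cv::resize(image(faces[0]), face, cv::Size(64, 64)); // step 3: fixed size
    cv::equalizeHist(face, face);                        // step 4: filter
    // steps 5-6: run each expression classifier on face and pick the best score
}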

Are there other methods I could be using? I have seen SVMs and Gabor features mentioned elsewhere but I don't know how they work.

2013-12-22 07:24:36 -0600 asked a question How do I reduce the effects of lighting conditions for a face expression classification system?

I have started making a system for telling what expression a given face has. This is my method:

TRAIN CLASSIFIERS

  • Find and crop the face rectangle.
  • Capture sample face rectangles with different facial expressions (Happy, sad, neutral)
  • Train several cascade classifiers using opencv_traincascade currently using:

    -numStages 1 
    -stageType BOOST 
    -featureType HAAR 
    -w 42 
    -h 53 
    -bt GAB 
    -minHitRate 0.99 
    -maxFalseAlarmRate 0.009 
    -weightTrimRate 0.95 
    -maxDepth 1 
    -maxWeakCount 100
    

    (I don't totally understand all of these parameters)

USE CLASSIFIERS

  • Find and crop the face rectangle.
  • Run the face over each cascade classifier.
  • Pick the classifier with the highest result.

So far it has worked quite well; however, it is a little sensitive to lighting conditions.

I want to add some kind of filter before training and classification to reduce the effects of lighting conditions, but I am not sure which ones will work best:

  • Gabor?
  • Laplace?
  • Canny edge?
  • Gaussian blur?
  • Equalise histogram? etc...
  • Use LBP?

What filters might improve the results? Or are there any other ways to improve my face recognition system?
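
For example, the histogram option might look roughly like this (a sketch using CLAHE; greyFace is the cropped grey face):

// CLAHE (contrast-limited adaptive histogram equalisation) tends to cope
// with uneven lighting better than a single global equalizeHist.
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
cv::Mat normalised;
clahe->apply(greyFace, normalised);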

2013-12-19 16:12:34 -0600 commented answer Convert old haarcascade file to new cascade classifier format

Hello - I am also trying to convert the old format into the new format; however, this answer still uses the oldCascade property. Is there any way to use the new featureEvaluator format? I have also asked a similar question on stack overflow: http://stackoverflow.com/questions/20692660/how-would-i-convert-old-format-haarcascades-into-the-new-format-xml

2013-12-14 15:04:51 -0600 asked a question Why does opencv_traincascade not ignore nodes with a false alarm rate of 1

I'm using opencv_traincascade with a one-stage classifier. I don't really know how it works, but it seems like it guesses at rectangles ('features' in CV terminology?) to try to divide the positive samples from the negative samples.

  • HR is hit rate - the proportion of positive samples that are (correctly) passed through.
  • FA is false alarm rate - the proportion of negative samples that are (incorrectly) passed through.

Is my understanding correct?

My output looks like this:

===== TRAINING 0-stage =====
<BEGIN
POS count : consumed   27 : 27
NEG count : acceptanceRatio    416 : 1
Precalculation time: 3
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        1|
+----+---------+---------+
|   3|        1|0.0576923|
+----+---------+---------+
|   4|        1|0.00480769|
+----+---------+---------+
END>

Why does it not ignore feature/node/rectangle number 1 and number 2, since they appear to simply let through everything?

2013-10-22 17:33:48 -0600 commented question What object detection algorithms used by OpenCV are patented?

@stereomatching thanks for your reply. Forgive my ignorance, what are SURF and SIFT? Are they used as part of any object detection algorithms?

2013-10-22 16:51:15 -0600 asked a question What object detection algorithms used by OpenCV are patented?

I read here that the HAAR tree based detection might be covered by patents:

http://rafaelmizrahi.blogspot.co.uk/2007/02/intel-opencv-face-detection-license.html

What other algorithms are patented? What about Local Binary Patterns?

2013-10-22 15:36:25 -0600 commented question CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale

It worked. Got my answer in at last!

2013-10-22 15:35:48 -0600 answered a question CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale

The problem was that the image provided was at a different scale to the image that was detected. A classifier has a native window size that it must detect at. This can be exposed publicly in a subclass like so:

cv::Size RSCascadeClassifier::windowSize()
{
    cv::Size windowSize = data.origWinSize;

    // Note: this will not work with old format XML classifiers. 

    return windowSize;
}

Then it's just a case of resizing the image to the size of this window:

cv::Rect faceRect = objects[0];
cv::Mat foundFaceImage = rotatedFullImage(faceRect).clone();

cv::Size classifierSize = _faceLBPClassifier.windowSize();

cv::Mat scaledFace;
cv::resize(foundFaceImage, scaledFace, classifierSize, 0, 0, cv::INTER_LINEAR);

double weight;
int result = _faceLBPClassifier.runOnceOnWholeImage(scaledFace, weight);

This fixes the issue 90% of the time; however, sometimes it still won't fully detect the image. I think this is due to the averaging/merging of a few overlapping rectangles.

2013-10-18 13:05:15 -0600 commented question CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale

@barak - It says I have to wait 1 more day before I can answer it. It was really frustrating because it didn't tell me until I hit submit and it deleted my (quite detailed) answer. But thanks for the up vote :)

2013-10-18 05:32:26 -0600 received badge  Student (source)
2013-10-18 05:23:08 -0600 commented question CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale

I have figured this out but I can't post an answer because I am a new user :(

2013-10-17 17:21:02 -0600 commented answer LBP based Face Detection

Hello @venky, how do you specify the number of features to use at each stage? I could not see this in the documentation. http://docs.opencv.org/doc/user_guide/ug_traincascade.html

2013-10-17 15:27:14 -0600 received badge  Editor (source)
2013-10-17 10:51:45 -0600 asked a question CascadeClassifier won't detect image using runAt that it has already found using detectMultiScale

I want to make a function that just runs the classifier once using runAt, for when you already know where the object should be (e.g. if you have found a face but want to run several classifiers over the face to decide if it's happy or sad).

This was my attempt (I had to make a subclass since runAt is protected).

int RSCascadeClassifier::runOnceOnWholeImage(const cv::Mat& image,
                                             double & gypWeight)
{
    cv::Size size(image.cols, image.rows);

    // Need to set image first, see:
    // http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#featureevaluator-setimage
    bool success = featureEvaluator->setImage( image, size );

    if (!success)
    {
        CV_Assert( false );
        return 0;
    }

    // Run once over the whole image
    int result = runAt(this->featureEvaluator, cv::Point(0, 0), gypWeight);

    return result;
}

To test this out I got the result of a successful detectMultiScale, then cropped the image and sent it to my method:

faceLBPClassifier.detectMultiScale(fullImage,
                                    objects,
                                    1.1, // Scale factor
                                    2, // Min neighbours
                                    CV_HAAR_SCALE_IMAGE | 0, // Flags
                                    cv::Size( 80 , 120 ) // Min size
                                    );

if (objects.size() > 0)
{
    cv::Rect faceRect = objects[0];
    cv::Mat foundFace = fullImage(faceRect).clone();

    double weight;
    int result = faceLBPClassifier.runOnceOnWholeImage(foundFace, weight);

    NSLog(@"r:%d w:%f", result, weight);
}

The results were:

r:0 w:-2.135099
r:-2 w:-2.469498
r:0 w:-2.135099
r:-1 w:-3.106470
r:0 w:-2.135099  ...

I.e. no success (1) results.

This should always return success, since the same classifier has already found the result in that exact rectangle.

Does anyone know what I have done wrong?

2013-10-17 10:22:56 -0600 asked a question OpenCV traincascade gets stuck with a Hit rate of 1 and a False alarm rate of 0

I want to train an LBP classifier. I have 103 positive and 500 negative samples. I used almost default values, except for -featureType LBP and -numPos 88.

opencv_traincascade -data "$NAME"_Output \
                    -vec "$NAME".vec \
                    -bg "$NAME"_Negative.txt \
                    -numPos 88 \
                    -numNeg 500 \
                    -numStages 20 \
                    -stageType BOOST \
                    -featureType LBP \
                    -w 32 \
                    -h 48 \
                    -bt GAB \
                    -minHitRate 0.995 \
                    -maxFalseAlarmRate 0.5 \
                    -weightTrimRate 0.95 \
                    -maxDepth 1 \
                    -maxWeakCount 100

The classifier gets stuck at stage 2 after stage 1 got a full hit rate and a zero false alarm rate. I tried playing with the numPos, numNeg, and bt parameters, but it always gets approximately the same result. I also tried a HAAR classifier; it got a little further (stage 4) but eventually got stuck in the same way.

===== TRAINING 0-stage =====
<BEGIN
POS count : consumed   88 : 88
NEG count : acceptanceRatio    500 : 1
Precalculation time: 1
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|    0.046|
+----+---------+---------+
END>

===== TRAINING 1-stage =====
<BEGIN
POS count : consumed   88 : 88
NEG count : acceptanceRatio    500 : 0.0456038
Precalculation time: 0
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1|        1|
+----+---------+---------+
|   2|        1|        0|
+----+---------+---------+
END>

===== TRAINING 2-stage =====
<BEGIN
POS count : consumed   88 : 88

I don't know enough about this, but my guess is that it's complete after stage 1 yet still trying to generate 18 more stages. My data set is quite simple: the positives should all be quite similar, and the negative images are all the same size.

  • Is it possible to have a classifier with only 2 stages? (stage 0 and stage 1)
  • Are LBP classifiers more tricky to get working than HAAR? Should I stick to HAAR since I am new at this?
  • Have I made any mistakes with the parameters or input data?
2013-10-15 07:27:11 -0600 received badge  Supporter (source)