
Danst's profile - activity

2016-01-12 11:21:38 -0600 received badge  Enthusiast
2015-05-05 08:57:55 -0600 commented question Issue with cascade classifier

unfortunately the images are confidential so I can't share them...

2015-05-05 08:24:02 -0600 asked a question Issue with cascade classifier

I am using detectMultiScale to process two images, A and B. When I process A alone there are no detections; however, when I process A after B there is one detection.

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

CascadeClassifier classifier("classifiers/haarcascade_frontalface_alt.xml");

Mat imageA, imageB;
vector<Rect> face_detectionsA_beforeB, face_detectionsA_afterB, face_detectionsB;

string fileA = "A.jpg";
string fileB = "B.jpg";

imageA = imread(fileA, 1); // 1 = load as 3-channel BGR (IMREAD_COLOR)
imageB = imread(fileB, 1);

classifier.detectMultiScale(imageA, face_detectionsA_beforeB, 1.2, 1, CASCADE_FIND_BIGGEST_OBJECT, Size(64, 64)); // 0 faces detected
classifier.detectMultiScale(imageB, face_detectionsB, 1.2, 1, CASCADE_FIND_BIGGEST_OBJECT, Size(64, 64)); // 2 faces detected
classifier.detectMultiScale(imageA, face_detectionsA_afterB, 1.2, 1, CASCADE_FIND_BIGGEST_OBJECT, Size(64, 64)); // 1 face detected

It is as if the output for the current image depended on the output of the previous one, which doesn't make much sense to me... I don't have this problem (i.e., I don't get any detection when I process A after B) if I replace image B with other images, or if I reload the classifier before every call to detectMultiScale.

Any ideas about what could be happening?

2015-03-10 04:53:03 -0600 received badge  Student (source)
2015-03-09 11:19:42 -0600 asked a question SIFT feature descriptor implementation

According to Lowe's paper on the original SIFT algorithm, a feature descriptor consisting of a 4 x 4 array of orientation histograms is calculated from a 16 x 16 sample window. The scale of the descriptor is only used to select the level of Gaussian blur for the image.

Looking at the OpenCV implementation, this doesn't seem to be the case. In calcSIFTDescriptor there is the following code to calculate the histograms:

for( k = 0; k < len; k++ )
{
  // histogram update
}

Where len is the number of samples used. According to Lowe's algorithm this should always be 256 (16 x 16), shouldn't it? In the OpenCV implementation, however, len depends on the scale of the descriptor.

Could someone clarify this?

Thanks