
How to detect faces in variable-size images

I have been able to detect faces using the Haar cascade classifier as long as the width and height of the image are equal (e.g., 500x500). For example, with a photo whose original dimensions are 781x1277, the programme fails to detect the face. When I resized copies of that image to 500x500 and 900x900, the face was detected in both. Is it a requirement that the input images be square?

Here is the code:

static cv::String face_cascade_name = "haarcascade_frontalface_alt.xml";
static cv::CascadeClassifier face_cascade(face_cascade_name);
...
if (!face_cascade.load(face_cascade_name))
{
// Error - stop
}
...
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(writableBitmap->PixelBuffer);
int height = writableBitmap->PixelHeight;
int width = writableBitmap->PixelWidth;
// create a matrix the size and type of the image
// note: the cv::Mat constructor takes (rows, cols, type), i.e. (height, width, type)
cv::Mat mat(height, width, CV_8UC4);
memcpy(mat.data, pPixels, 4 * height * width);
// convert to grayscale (WriteableBitmap pixels are 4-channel BGRA)
cv::Mat intermediateMat;
cv::cvtColor(mat, intermediateMat, CV_BGRA2GRAY);
std::vector<cv::Rect> faces;
equalizeHist(intermediateMat, intermediateMat);
// Detect faces
// face_cascade.detectMultiScale(intermediateMat, faces, 1.05, 6, 0, cv::Size(20, 20), cv::Size(300, 300));
face_cascade.detectMultiScale(intermediateMat, faces);
for (size_t i = 0; i < faces.size(); i++)
{
    cv::Point center(faces[i].x + faces[i].width / 2, faces[i].y + faces[i].height / 2);
    cv::ellipse(mat, center, cv::Size(faces[i].width / 2, faces[i].height / 2), 0, 0, 360, cv::Scalar(255, 0, 255), 2, 8, 0);
}

Any insight would be greatly appreciated.

Thanks.