you probably won't like my answer, but it is almost 2018 now:

you should use transfer learning with a pretrained dnn, like the facenet one.

(believe me, i have been where you are now, and i have probably seen it all, from lbp variants to rootsift, gabor filters, fishervectors, learned lpqdisc, CDIKP, and whatnot.)

and it would be fairly easy, too!

download https://raw.githubusercontent.com/pyannote/pyannote-data/master/openface.nn4.small2.v1.t7

(~30mb, but you won't regret it, i promise!)

    #include <opencv2/dnn.hpp>
    using namespace cv;

    // load the pretrained torch model once, at startup
    dnn::Net net = dnn::readNetFromTorch("openface.nn4.small2.v1.t7");

    // returns a 1 x 128 float feature row for one face image
    Mat process(const Mat &image)
    {
        // scale to [0..1], resize to 96x96, swap BGR to RGB, no center crop
        Mat inputBlob = dnn::blobFromImage(image, 1./255, Size(96,96), Scalar(), true, false);
        net.setInput(inputBlob);
        // clone the result: forward() returns a Mat that points at the net's
        // internal buffers, which get overwritten on the next forward() call
        return net.forward().clone();
    }

then just feed (color!) images into it, and use the resulting (1 x 128 float) features to train any ml classifier of your choice.

(you'll want some proper cropping (CascadeClassifier) and face alignment (so the eyes are on a horizontal line) before that.)

in the end, it's just a giant "fixed function" feature processor.

(added a complete example here)

you can test it in your web browser: tutorial_dnn_javascript