alternatively, you could use opencv's new dnn module with a pretrained FaceNet model for face recognition:

// load the pretrained OpenFace model (Torch format), e.g. from:
// https://raw.githubusercontent.com/pyannote/pyannote-data/master/openface.nn4.small2.v1.t7
dnn::Net net = dnn::readNetFromTorch("openface.nn4.small2.v1.t7");

// image: a BGR face crop; scale to [0..1], resize to 96x96, swap R and B channels, no cropping
Mat inputBlob = dnn::blobFromImage(image, 1./255, Size(96,96), Scalar(), true, false);
net.setInput(inputBlob);
Mat feature = net.forward().clone(); // 1x128 embedding (see the note on clone() below)

edit:

careful here, the output of net.forward() points to the last (internal) network blob, which gets overwritten on the next forward pass, so we need a clone() to get distinct results for each image!
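
a minimal sketch of how the pipeline above could be wrapped (the getEmbedding() helper name is my own, not part of OpenCV), so that every call returns its own 1x128 embedding:

#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
using namespace cv;

// extract a 1x128 face embedding; clone() so the result survives the next forward()
Mat getEmbedding(dnn::Net& net, const Mat& faceCrop)
{
    Mat blob = dnn::blobFromImage(faceCrop, 1./255, Size(96,96), Scalar(), true, false);
    net.setInput(blob);
    return net.forward().clone();
}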

then, use those 128-element float feature vectors to simply compare images:

double distance = norm(feature1, feature2); // L2 distance, smaller means more similar
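
for instance, using the hypothetical getEmbedding() helper sketched above (the file names and the 0.6 threshold are assumptions, you will have to tune the threshold on your own data):

#include <opencv2/imgcodecs.hpp>

Mat face1 = imread("person_a.png"); // assumed: already detected / aligned face crops
Mat face2 = imread("person_b.png");

Mat feature1 = getEmbedding(net, face1);
Mat feature2 = getEmbedding(net, face2);

double distance = norm(feature1, feature2);
bool samePerson = (distance < 0.6); // threshold is an assumption, tune it on your data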

or, train your favourite ML or clustering algorithm on those features.
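
a rough sketch of that route, assuming you already stacked your embeddings into an N x 128 CV_32F Mat with one CV_32S person id per row (the function and variable names are my own):

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>
using namespace cv;

// trainFeatures: N x 128 CV_32F (one embedding per row)
// trainLabels:   N x 1   CV_32S (person id per row)
Ptr<ml::SVM> trainFaceClassifier(const Mat& trainFeatures, const Mat& trainLabels)
{
    Ptr<ml::SVM> svm = ml::SVM::create();
    svm->setType(ml::SVM::C_SVC);
    svm->setKernel(ml::SVM::LINEAR);
    svm->train(trainFeatures, ml::ROW_SAMPLE, trainLabels);
    return svm;
}

// later, for a new 1 x 128 embedding:
// float personId = svm->predict(getEmbedding(net, newFace));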