OpenFace Python version has different results compared with translated C++ version
There are two versions of the code: the first is the original Python version from OpenFace, the second is a C++ version I translated myself.
They are supposed to give the same result when using the dlib landmark detector; however, the resulting landmark positions are slightly different (the same .dat file is used in both).
Here I only paste the simplified, necessary parts.
1. Python version (copied from OpenFace align-dlib.py and align_dlib.py)
    rgb = imgObject.getRGB()

    def findLandmarks(self, rgb, bb):
        points = self.predictor(rgb, bb)

    def getBGR(self):
        bgr = cv2.imread(self.path)

    def getRGB(self):
        bgr = self.getBGR()
        rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
Then the first two landmark positions are [[ 91. 112.] [ 91. 127.]...
2. Translated C++ version
    Mat mat_img = imread(argv[i]);
    Mat mat_img_rgb;
    cv::cvtColor(mat_img, mat_img_rgb, CV_BGR2RGB);
    cv_image<rgb_pixel> cv_img(mat_img_rgb);
    shape_predictor sp;
    full_object_detection shape = sp(cv_img, face);
The first two landmark positions are (90 112) (90 126)
I am guessing the reason I get a different result is that I converted the Mat to a cv_image, but I have to do this to use the dlib detector.
The shape_predictor actually works on grayscale images, so just convert to grayscale (not RGB) and use cv_image<uchar>.
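A minimal sketch of that suggestion in C++ (not from the thread itself; the model file name and the use of dlib's frontal face detector are my assumptions, since the question only says the same .dat file is used):

    #include <iostream>
    #include <dlib/image_processing.h>
    #include <dlib/image_processing/frontal_face_detector.h>
    #include <dlib/opencv.h>
    #include <opencv2/opencv.hpp>

    int main(int argc, char** argv)
    {
        // Decode with OpenCV as in the question, but convert to grayscale instead of RGB.
        cv::Mat bgr = cv::imread(argv[1]);
        cv::Mat gray;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);

        // Wrap the single-channel Mat for dlib: cv_image<uchar>, as suggested above.
        dlib::cv_image<unsigned char> img(gray);

        dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
        dlib::shape_predictor sp;
        // Assumed model file name; the thread only mentions "the same .dat file".
        dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;

        for (const dlib::rectangle& face : detector(img))
        {
            dlib::full_object_detection shape = sp(img, face);
            std::cout << shape.part(0) << " " << shape.part(1) << std::endl;
        }
        return 0;
    }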
Are you using lossy JPEG images? (Don't!)
Thanks for your comments.
1. Even if shape_predictor actually works on grayscale images, I see many cases in which the input is just an RGB/BGR 3-channel image.
2. It shouldn't matter whether the input is lossy or not, since both versions of the code take the same picture as input.
2. It does matter: the same (JPEG) image will get decompressed differently by different programs/libs.
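To illustrate that point (my own check, not from the answer, and it assumes dlib was built with JPEG support): decode the same file with OpenCV and with dlib, and count the pixels on which the two decoders disagree.

    #include <iostream>
    #include <dlib/image_io.h>
    #include <opencv2/opencv.hpp>

    int main(int argc, char** argv)
    {
        // OpenCV's JPEG decoder
        cv::Mat cv_bgr = cv::imread(argv[1]);

        // dlib's JPEG decoder (requires DLIB_JPEG_SUPPORT)
        dlib::array2d<dlib::rgb_pixel> dl_rgb;
        dlib::load_image(dl_rgb, argv[1]);

        long diff = 0;
        for (int r = 0; r < cv_bgr.rows; ++r)
            for (int c = 0; c < cv_bgr.cols; ++c)
            {
                const cv::Vec3b& p = cv_bgr.at<cv::Vec3b>(r, c);   // OpenCV stores BGR
                if (p[2] != dl_rgb[r][c].red  ||
                    p[1] != dl_rgb[r][c].green ||
                    p[0] != dl_rgb[r][c].blue)
                    ++diff;
            }
        std::cout << diff << " of " << cv_bgr.rows * cv_bgr.cols
                  << " pixels differ between the two decoders" << std::endl;
        return 0;
    }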
How can I force these two programs to use the same JPEG decoder?
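Not an answer from the thread, but one way to take the JPEG decoder out of the equation entirely is to decode the test image once and re-save it losslessly, then point both programs at that file (the output name here is just an example):

    #include <opencv2/opencv.hpp>

    int main(int argc, char** argv)
    {
        // Decode the JPEG once, here, with whatever decoder OpenCV uses...
        cv::Mat img = cv::imread(argv[1]);
        // ...and write a lossless copy; both the Python and the C++ program
        // then read test_lossless.png and see exactly the same pixel values.
        cv::imwrite("test_lossless.png", img);
        return 0;
    }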
I do like the fact @berak is willing to help you out on this, but you are aware that this is an OpenCV forum and not an OpenFace forum? :P